Jan 20 13:23:04 np0005589310 kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 20 13:23:04 np0005589310 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 20 13:23:04 np0005589310 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 20 13:23:04 np0005589310 kernel: BIOS-provided physical RAM map:
Jan 20 13:23:04 np0005589310 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 13:23:04 np0005589310 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 13:23:04 np0005589310 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 13:23:04 np0005589310 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 20 13:23:04 np0005589310 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 20 13:23:04 np0005589310 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 13:23:04 np0005589310 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 13:23:04 np0005589310 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 20 13:23:04 np0005589310 kernel: NX (Execute Disable) protection: active
Jan 20 13:23:04 np0005589310 kernel: APIC: Static calls initialized
Jan 20 13:23:04 np0005589310 kernel: SMBIOS 2.8 present.
Jan 20 13:23:04 np0005589310 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 20 13:23:04 np0005589310 kernel: Hypervisor detected: KVM
Jan 20 13:23:04 np0005589310 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 13:23:04 np0005589310 kernel: kvm-clock: using sched offset of 3128454352 cycles
Jan 20 13:23:04 np0005589310 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 13:23:04 np0005589310 kernel: tsc: Detected 2799.998 MHz processor
Jan 20 13:23:04 np0005589310 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 20 13:23:04 np0005589310 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 13:23:04 np0005589310 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 20 13:23:04 np0005589310 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 20 13:23:04 np0005589310 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 20 13:23:04 np0005589310 kernel: Using GB pages for direct mapping
Jan 20 13:23:04 np0005589310 kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 20 13:23:04 np0005589310 kernel: ACPI: Early table checksum verification disabled
Jan 20 13:23:04 np0005589310 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 20 13:23:04 np0005589310 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 13:23:04 np0005589310 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 13:23:04 np0005589310 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 13:23:04 np0005589310 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 20 13:23:04 np0005589310 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 13:23:04 np0005589310 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 13:23:04 np0005589310 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 20 13:23:04 np0005589310 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 20 13:23:04 np0005589310 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 20 13:23:04 np0005589310 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 20 13:23:04 np0005589310 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 20 13:23:04 np0005589310 kernel: No NUMA configuration found
Jan 20 13:23:04 np0005589310 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 20 13:23:04 np0005589310 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 20 13:23:04 np0005589310 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 20 13:23:04 np0005589310 kernel: Zone ranges:
Jan 20 13:23:04 np0005589310 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 13:23:04 np0005589310 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 20 13:23:04 np0005589310 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 20 13:23:04 np0005589310 kernel:  Device   empty
Jan 20 13:23:04 np0005589310 kernel: Movable zone start for each node
Jan 20 13:23:04 np0005589310 kernel: Early memory node ranges
Jan 20 13:23:04 np0005589310 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 13:23:04 np0005589310 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 20 13:23:04 np0005589310 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 20 13:23:04 np0005589310 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 20 13:23:04 np0005589310 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 13:23:04 np0005589310 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 13:23:04 np0005589310 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 20 13:23:04 np0005589310 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 13:23:04 np0005589310 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 13:23:04 np0005589310 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 13:23:04 np0005589310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 13:23:04 np0005589310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 13:23:04 np0005589310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 13:23:04 np0005589310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 13:23:04 np0005589310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 13:23:04 np0005589310 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 13:23:04 np0005589310 kernel: TSC deadline timer available
Jan 20 13:23:04 np0005589310 kernel: CPU topo: Max. logical packages:   8
Jan 20 13:23:04 np0005589310 kernel: CPU topo: Max. logical dies:       8
Jan 20 13:23:04 np0005589310 kernel: CPU topo: Max. dies per package:   1
Jan 20 13:23:04 np0005589310 kernel: CPU topo: Max. threads per core:   1
Jan 20 13:23:04 np0005589310 kernel: CPU topo: Num. cores per package:     1
Jan 20 13:23:04 np0005589310 kernel: CPU topo: Num. threads per package:   1
Jan 20 13:23:04 np0005589310 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 20 13:23:04 np0005589310 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 13:23:04 np0005589310 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 20 13:23:04 np0005589310 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 20 13:23:04 np0005589310 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 20 13:23:04 np0005589310 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 20 13:23:04 np0005589310 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 20 13:23:04 np0005589310 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 20 13:23:04 np0005589310 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 20 13:23:04 np0005589310 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 20 13:23:04 np0005589310 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 20 13:23:04 np0005589310 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 20 13:23:04 np0005589310 kernel: Booting paravirtualized kernel on KVM
Jan 20 13:23:04 np0005589310 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 13:23:04 np0005589310 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 20 13:23:04 np0005589310 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 20 13:23:04 np0005589310 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 20 13:23:04 np0005589310 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 20 13:23:04 np0005589310 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 20 13:23:04 np0005589310 kernel: random: crng init done
Jan 20 13:23:04 np0005589310 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 20 13:23:04 np0005589310 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 13:23:04 np0005589310 kernel: Fallback order for Node 0: 0 
Jan 20 13:23:04 np0005589310 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 20 13:23:04 np0005589310 kernel: Policy zone: Normal
Jan 20 13:23:04 np0005589310 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 13:23:04 np0005589310 kernel: software IO TLB: area num 8.
Jan 20 13:23:04 np0005589310 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 20 13:23:04 np0005589310 kernel: ftrace: allocating 49417 entries in 194 pages
Jan 20 13:23:04 np0005589310 kernel: ftrace: allocated 194 pages with 3 groups
Jan 20 13:23:04 np0005589310 kernel: Dynamic Preempt: voluntary
Jan 20 13:23:04 np0005589310 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 13:23:04 np0005589310 kernel: rcu: 	RCU event tracing is enabled.
Jan 20 13:23:04 np0005589310 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 20 13:23:04 np0005589310 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 20 13:23:04 np0005589310 kernel: 	Rude variant of Tasks RCU enabled.
Jan 20 13:23:04 np0005589310 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 20 13:23:04 np0005589310 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 13:23:04 np0005589310 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 20 13:23:04 np0005589310 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 20 13:23:04 np0005589310 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 20 13:23:04 np0005589310 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 20 13:23:04 np0005589310 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 20 13:23:04 np0005589310 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 13:23:04 np0005589310 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 20 13:23:04 np0005589310 kernel: Console: colour VGA+ 80x25
Jan 20 13:23:04 np0005589310 kernel: printk: console [ttyS0] enabled
Jan 20 13:23:04 np0005589310 kernel: ACPI: Core revision 20230331
Jan 20 13:23:04 np0005589310 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 13:23:04 np0005589310 kernel: x2apic enabled
Jan 20 13:23:04 np0005589310 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 13:23:04 np0005589310 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 20 13:23:04 np0005589310 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 20 13:23:04 np0005589310 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 13:23:04 np0005589310 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 13:23:04 np0005589310 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 13:23:04 np0005589310 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 13:23:04 np0005589310 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 13:23:04 np0005589310 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 13:23:04 np0005589310 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 20 13:23:04 np0005589310 kernel: RETBleed: Mitigation: untrained return thunk
Jan 20 13:23:04 np0005589310 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 20 13:23:04 np0005589310 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 20 13:23:04 np0005589310 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 13:23:04 np0005589310 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 13:23:04 np0005589310 kernel: x86/bugs: return thunk changed
Jan 20 13:23:04 np0005589310 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 13:23:04 np0005589310 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 13:23:04 np0005589310 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 13:23:04 np0005589310 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 13:23:04 np0005589310 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 20 13:23:04 np0005589310 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 13:23:04 np0005589310 kernel: Freeing SMP alternatives memory: 40K
Jan 20 13:23:04 np0005589310 kernel: pid_max: default: 32768 minimum: 301
Jan 20 13:23:04 np0005589310 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 20 13:23:04 np0005589310 kernel: landlock: Up and running.
Jan 20 13:23:04 np0005589310 kernel: Yama: becoming mindful.
Jan 20 13:23:04 np0005589310 kernel: SELinux:  Initializing.
Jan 20 13:23:04 np0005589310 kernel: LSM support for eBPF active
Jan 20 13:23:04 np0005589310 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 20 13:23:04 np0005589310 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 20 13:23:04 np0005589310 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 20 13:23:04 np0005589310 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 20 13:23:04 np0005589310 kernel: ... version:                0
Jan 20 13:23:04 np0005589310 kernel: ... bit width:              48
Jan 20 13:23:04 np0005589310 kernel: ... generic registers:      6
Jan 20 13:23:04 np0005589310 kernel: ... value mask:             0000ffffffffffff
Jan 20 13:23:04 np0005589310 kernel: ... max period:             00007fffffffffff
Jan 20 13:23:04 np0005589310 kernel: ... fixed-purpose events:   0
Jan 20 13:23:04 np0005589310 kernel: ... event mask:             000000000000003f
Jan 20 13:23:04 np0005589310 kernel: signal: max sigframe size: 1776
Jan 20 13:23:04 np0005589310 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 13:23:04 np0005589310 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 20 13:23:04 np0005589310 kernel: smp: Bringing up secondary CPUs ...
Jan 20 13:23:04 np0005589310 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 13:23:04 np0005589310 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 20 13:23:04 np0005589310 kernel: smp: Brought up 1 node, 8 CPUs
Jan 20 13:23:04 np0005589310 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 20 13:23:04 np0005589310 kernel: node 0 deferred pages initialised in 7ms
Jan 20 13:23:04 np0005589310 kernel: Memory: 7763768K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618360K reserved, 0K cma-reserved)
Jan 20 13:23:04 np0005589310 kernel: devtmpfs: initialized
Jan 20 13:23:04 np0005589310 kernel: x86/mm: Memory block size: 128MB
Jan 20 13:23:04 np0005589310 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 13:23:04 np0005589310 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 20 13:23:04 np0005589310 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 13:23:04 np0005589310 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 13:23:04 np0005589310 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 20 13:23:04 np0005589310 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 20 13:23:04 np0005589310 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 20 13:23:04 np0005589310 kernel: audit: initializing netlink subsys (disabled)
Jan 20 13:23:04 np0005589310 kernel: audit: type=2000 audit(1768933382.877:1): state=initialized audit_enabled=0 res=1
Jan 20 13:23:04 np0005589310 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 20 13:23:04 np0005589310 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 13:23:04 np0005589310 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 13:23:04 np0005589310 kernel: cpuidle: using governor menu
Jan 20 13:23:04 np0005589310 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 13:23:04 np0005589310 kernel: PCI: Using configuration type 1 for base access
Jan 20 13:23:04 np0005589310 kernel: PCI: Using configuration type 1 for extended access
Jan 20 13:23:04 np0005589310 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 13:23:04 np0005589310 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 13:23:04 np0005589310 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 13:23:04 np0005589310 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 13:23:04 np0005589310 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 13:23:04 np0005589310 kernel: Demotion targets for Node 0: null
Jan 20 13:23:04 np0005589310 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 13:23:04 np0005589310 kernel: ACPI: Added _OSI(Module Device)
Jan 20 13:23:04 np0005589310 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 13:23:04 np0005589310 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 13:23:04 np0005589310 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 13:23:04 np0005589310 kernel: ACPI: Interpreter enabled
Jan 20 13:23:04 np0005589310 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 20 13:23:04 np0005589310 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 13:23:04 np0005589310 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 13:23:04 np0005589310 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 13:23:04 np0005589310 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 20 13:23:04 np0005589310 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 13:23:04 np0005589310 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [3] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [4] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [5] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [6] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [7] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [8] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [9] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [10] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [11] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [12] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [13] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [14] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [15] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [16] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [17] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [18] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [19] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [20] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [21] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [22] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [23] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [24] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [25] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [26] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [27] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [28] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [29] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [30] registered
Jan 20 13:23:04 np0005589310 kernel: acpiphp: Slot [31] registered
Jan 20 13:23:04 np0005589310 kernel: PCI host bridge to bus 0000:00
Jan 20 13:23:04 np0005589310 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 20 13:23:04 np0005589310 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 20 13:23:04 np0005589310 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 13:23:04 np0005589310 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 13:23:04 np0005589310 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 20 13:23:04 np0005589310 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 20 13:23:04 np0005589310 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 13:23:04 np0005589310 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 13:23:04 np0005589310 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 13:23:04 np0005589310 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 13:23:04 np0005589310 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 20 13:23:04 np0005589310 kernel: iommu: Default domain type: Translated
Jan 20 13:23:04 np0005589310 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 13:23:04 np0005589310 kernel: SCSI subsystem initialized
Jan 20 13:23:04 np0005589310 kernel: ACPI: bus type USB registered
Jan 20 13:23:04 np0005589310 kernel: usbcore: registered new interface driver usbfs
Jan 20 13:23:04 np0005589310 kernel: usbcore: registered new interface driver hub
Jan 20 13:23:04 np0005589310 kernel: usbcore: registered new device driver usb
Jan 20 13:23:04 np0005589310 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 20 13:23:04 np0005589310 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 20 13:23:04 np0005589310 kernel: PTP clock support registered
Jan 20 13:23:04 np0005589310 kernel: EDAC MC: Ver: 3.0.0
Jan 20 13:23:04 np0005589310 kernel: NetLabel: Initializing
Jan 20 13:23:04 np0005589310 kernel: NetLabel:  domain hash size = 128
Jan 20 13:23:04 np0005589310 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 20 13:23:04 np0005589310 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 20 13:23:04 np0005589310 kernel: PCI: Using ACPI for IRQ routing
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 13:23:04 np0005589310 kernel: vgaarb: loaded
Jan 20 13:23:04 np0005589310 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 13:23:04 np0005589310 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 13:23:04 np0005589310 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 13:23:04 np0005589310 kernel: pnp: PnP ACPI init
Jan 20 13:23:04 np0005589310 kernel: pnp: PnP ACPI: found 5 devices
Jan 20 13:23:04 np0005589310 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 13:23:04 np0005589310 kernel: NET: Registered PF_INET protocol family
Jan 20 13:23:04 np0005589310 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 20 13:23:04 np0005589310 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 20 13:23:04 np0005589310 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 13:23:04 np0005589310 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 13:23:04 np0005589310 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 20 13:23:04 np0005589310 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 20 13:23:04 np0005589310 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 20 13:23:04 np0005589310 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 20 13:23:04 np0005589310 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 20 13:23:04 np0005589310 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 13:23:04 np0005589310 kernel: NET: Registered PF_XDP protocol family
Jan 20 13:23:04 np0005589310 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 20 13:23:04 np0005589310 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 20 13:23:04 np0005589310 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 13:23:04 np0005589310 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 20 13:23:04 np0005589310 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 20 13:23:04 np0005589310 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 20 13:23:04 np0005589310 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 80097 usecs
Jan 20 13:23:04 np0005589310 kernel: PCI: CLS 0 bytes, default 64
Jan 20 13:23:04 np0005589310 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 20 13:23:04 np0005589310 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 20 13:23:04 np0005589310 kernel: ACPI: bus type thunderbolt registered
Jan 20 13:23:04 np0005589310 kernel: Trying to unpack rootfs image as initramfs...
Jan 20 13:23:04 np0005589310 kernel: Initialise system trusted keyrings
Jan 20 13:23:04 np0005589310 kernel: Key type blacklist registered
Jan 20 13:23:04 np0005589310 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 20 13:23:04 np0005589310 kernel: zbud: loaded
Jan 20 13:23:04 np0005589310 kernel: integrity: Platform Keyring initialized
Jan 20 13:23:04 np0005589310 kernel: integrity: Machine keyring initialized
Jan 20 13:23:04 np0005589310 kernel: Freeing initrd memory: 87956K
Jan 20 13:23:04 np0005589310 kernel: NET: Registered PF_ALG protocol family
Jan 20 13:23:04 np0005589310 kernel: xor: automatically using best checksumming function   avx       
Jan 20 13:23:04 np0005589310 kernel: Key type asymmetric registered
Jan 20 13:23:04 np0005589310 kernel: Asymmetric key parser 'x509' registered
Jan 20 13:23:04 np0005589310 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 20 13:23:04 np0005589310 kernel: io scheduler mq-deadline registered
Jan 20 13:23:04 np0005589310 kernel: io scheduler kyber registered
Jan 20 13:23:04 np0005589310 kernel: io scheduler bfq registered
Jan 20 13:23:04 np0005589310 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 20 13:23:04 np0005589310 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 20 13:23:04 np0005589310 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 20 13:23:04 np0005589310 kernel: ACPI: button: Power Button [PWRF]
Jan 20 13:23:04 np0005589310 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 20 13:23:04 np0005589310 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 20 13:23:04 np0005589310 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 20 13:23:04 np0005589310 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 13:23:04 np0005589310 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 13:23:04 np0005589310 kernel: Non-volatile memory driver v1.3
Jan 20 13:23:04 np0005589310 kernel: rdac: device handler registered
Jan 20 13:23:04 np0005589310 kernel: hp_sw: device handler registered
Jan 20 13:23:04 np0005589310 kernel: emc: device handler registered
Jan 20 13:23:04 np0005589310 kernel: alua: device handler registered
Jan 20 13:23:04 np0005589310 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 20 13:23:04 np0005589310 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 20 13:23:04 np0005589310 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 20 13:23:04 np0005589310 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 20 13:23:04 np0005589310 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 20 13:23:04 np0005589310 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 20 13:23:04 np0005589310 kernel: usb usb1: Product: UHCI Host Controller
Jan 20 13:23:04 np0005589310 kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 20 13:23:04 np0005589310 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 20 13:23:04 np0005589310 kernel: hub 1-0:1.0: USB hub found
Jan 20 13:23:04 np0005589310 kernel: hub 1-0:1.0: 2 ports detected
Jan 20 13:23:04 np0005589310 kernel: usbcore: registered new interface driver usbserial_generic
Jan 20 13:23:04 np0005589310 kernel: usbserial: USB Serial support registered for generic
Jan 20 13:23:04 np0005589310 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 13:23:04 np0005589310 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 13:23:04 np0005589310 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 13:23:04 np0005589310 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 13:23:04 np0005589310 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 13:23:04 np0005589310 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 20 13:23:04 np0005589310 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 13:23:04 np0005589310 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T18:23:03 UTC (1768933383)
Jan 20 13:23:04 np0005589310 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 20 13:23:04 np0005589310 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 13:23:04 np0005589310 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 20 13:23:04 np0005589310 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 20 13:23:04 np0005589310 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 20 13:23:04 np0005589310 kernel: usbcore: registered new interface driver usbhid
Jan 20 13:23:04 np0005589310 kernel: usbhid: USB HID core driver
Jan 20 13:23:04 np0005589310 kernel: drop_monitor: Initializing network drop monitor service
Jan 20 13:23:04 np0005589310 kernel: Initializing XFRM netlink socket
Jan 20 13:23:04 np0005589310 kernel: NET: Registered PF_INET6 protocol family
Jan 20 13:23:04 np0005589310 kernel: Segment Routing with IPv6
Jan 20 13:23:04 np0005589310 kernel: NET: Registered PF_PACKET protocol family
Jan 20 13:23:04 np0005589310 kernel: mpls_gso: MPLS GSO support
Jan 20 13:23:04 np0005589310 kernel: IPI shorthand broadcast: enabled
Jan 20 13:23:04 np0005589310 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 20 13:23:04 np0005589310 kernel: AES CTR mode by8 optimization enabled
Jan 20 13:23:04 np0005589310 kernel: sched_clock: Marking stable (1164004892, 146936175)->(1450214906, -139273839)
Jan 20 13:23:04 np0005589310 kernel: registered taskstats version 1
Jan 20 13:23:04 np0005589310 kernel: Loading compiled-in X.509 certificates
Jan 20 13:23:04 np0005589310 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 20 13:23:04 np0005589310 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 20 13:23:04 np0005589310 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 20 13:23:04 np0005589310 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 20 13:23:04 np0005589310 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 20 13:23:04 np0005589310 kernel: Demotion targets for Node 0: null
Jan 20 13:23:04 np0005589310 kernel: page_owner is disabled
Jan 20 13:23:04 np0005589310 kernel: Key type .fscrypt registered
Jan 20 13:23:04 np0005589310 kernel: Key type fscrypt-provisioning registered
Jan 20 13:23:04 np0005589310 kernel: Key type big_key registered
Jan 20 13:23:04 np0005589310 kernel: Key type encrypted registered
Jan 20 13:23:04 np0005589310 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 13:23:04 np0005589310 kernel: Loading compiled-in module X.509 certificates
Jan 20 13:23:04 np0005589310 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 20 13:23:04 np0005589310 kernel: ima: Allocated hash algorithm: sha256
Jan 20 13:23:04 np0005589310 kernel: ima: No architecture policies found
Jan 20 13:23:04 np0005589310 kernel: evm: Initialising EVM extended attributes:
Jan 20 13:23:04 np0005589310 kernel: evm: security.selinux
Jan 20 13:23:04 np0005589310 kernel: evm: security.SMACK64 (disabled)
Jan 20 13:23:04 np0005589310 kernel: evm: security.SMACK64EXEC (disabled)
Jan 20 13:23:04 np0005589310 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 20 13:23:04 np0005589310 kernel: evm: security.SMACK64MMAP (disabled)
Jan 20 13:23:04 np0005589310 kernel: evm: security.apparmor (disabled)
Jan 20 13:23:04 np0005589310 kernel: evm: security.ima
Jan 20 13:23:04 np0005589310 kernel: evm: security.capability
Jan 20 13:23:04 np0005589310 kernel: evm: HMAC attrs: 0x1
Jan 20 13:23:04 np0005589310 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 20 13:23:04 np0005589310 kernel: Running certificate verification RSA selftest
Jan 20 13:23:04 np0005589310 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 20 13:23:04 np0005589310 kernel: Running certificate verification ECDSA selftest
Jan 20 13:23:04 np0005589310 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 20 13:23:04 np0005589310 kernel: clk: Disabling unused clocks
Jan 20 13:23:04 np0005589310 kernel: Freeing unused decrypted memory: 2028K
Jan 20 13:23:04 np0005589310 kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 20 13:23:04 np0005589310 kernel: Write protecting the kernel read-only data: 30720k
Jan 20 13:23:04 np0005589310 kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 20 13:23:04 np0005589310 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 20 13:23:04 np0005589310 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 20 13:23:04 np0005589310 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 20 13:23:04 np0005589310 kernel: usb 1-1: Manufacturer: QEMU
Jan 20 13:23:04 np0005589310 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 20 13:23:04 np0005589310 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 20 13:23:04 np0005589310 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 20 13:23:04 np0005589310 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 20 13:23:04 np0005589310 kernel: Run /init as init process
Jan 20 13:23:04 np0005589310 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 20 13:23:04 np0005589310 systemd: Detected virtualization kvm.
Jan 20 13:23:04 np0005589310 systemd: Detected architecture x86-64.
Jan 20 13:23:04 np0005589310 systemd: Running in initrd.
Jan 20 13:23:04 np0005589310 systemd: No hostname configured, using default hostname.
Jan 20 13:23:04 np0005589310 systemd: Hostname set to <localhost>.
Jan 20 13:23:04 np0005589310 systemd: Initializing machine ID from VM UUID.
Jan 20 13:23:04 np0005589310 systemd: Queued start job for default target Initrd Default Target.
Jan 20 13:23:04 np0005589310 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 20 13:23:04 np0005589310 systemd: Reached target Local Encrypted Volumes.
Jan 20 13:23:04 np0005589310 systemd: Reached target Initrd /usr File System.
Jan 20 13:23:04 np0005589310 systemd: Reached target Local File Systems.
Jan 20 13:23:04 np0005589310 systemd: Reached target Path Units.
Jan 20 13:23:04 np0005589310 systemd: Reached target Slice Units.
Jan 20 13:23:04 np0005589310 systemd: Reached target Swaps.
Jan 20 13:23:04 np0005589310 systemd: Reached target Timer Units.
Jan 20 13:23:04 np0005589310 systemd: Listening on D-Bus System Message Bus Socket.
Jan 20 13:23:04 np0005589310 systemd: Listening on Journal Socket (/dev/log).
Jan 20 13:23:04 np0005589310 systemd: Listening on Journal Socket.
Jan 20 13:23:04 np0005589310 systemd: Listening on udev Control Socket.
Jan 20 13:23:04 np0005589310 systemd: Listening on udev Kernel Socket.
Jan 20 13:23:04 np0005589310 systemd: Reached target Socket Units.
Jan 20 13:23:04 np0005589310 systemd: Starting Create List of Static Device Nodes...
Jan 20 13:23:04 np0005589310 systemd: Starting Journal Service...
Jan 20 13:23:04 np0005589310 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 20 13:23:04 np0005589310 systemd: Starting Apply Kernel Variables...
Jan 20 13:23:04 np0005589310 systemd: Starting Create System Users...
Jan 20 13:23:04 np0005589310 systemd: Starting Setup Virtual Console...
Jan 20 13:23:04 np0005589310 systemd: Finished Create List of Static Device Nodes.
Jan 20 13:23:04 np0005589310 systemd: Finished Apply Kernel Variables.
Jan 20 13:23:04 np0005589310 systemd-journald[309]: Journal started
Jan 20 13:23:04 np0005589310 systemd-journald[309]: Runtime Journal (/run/log/journal/6fed1acbe03a42468d491248ad1fe57b) is 8.0M, max 153.6M, 145.6M free.
Jan 20 13:23:04 np0005589310 systemd-sysusers[313]: Creating group 'users' with GID 100.
Jan 20 13:23:04 np0005589310 systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Jan 20 13:23:04 np0005589310 systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 20 13:23:04 np0005589310 systemd: Started Journal Service.
Jan 20 13:23:04 np0005589310 systemd[1]: Finished Create System Users.
Jan 20 13:23:04 np0005589310 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 20 13:23:04 np0005589310 systemd[1]: Starting Create Volatile Files and Directories...
Jan 20 13:23:04 np0005589310 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 20 13:23:04 np0005589310 systemd[1]: Finished Create Volatile Files and Directories.
Jan 20 13:23:04 np0005589310 systemd[1]: Finished Setup Virtual Console.
Jan 20 13:23:04 np0005589310 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 20 13:23:04 np0005589310 systemd[1]: Starting dracut cmdline hook...
Jan 20 13:23:04 np0005589310 dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Jan 20 13:23:04 np0005589310 dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 20 13:23:04 np0005589310 systemd[1]: Finished dracut cmdline hook.
Jan 20 13:23:04 np0005589310 systemd[1]: Starting dracut pre-udev hook...
Jan 20 13:23:04 np0005589310 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 13:23:04 np0005589310 kernel: device-mapper: uevent: version 1.0.3
Jan 20 13:23:04 np0005589310 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 20 13:23:04 np0005589310 kernel: RPC: Registered named UNIX socket transport module.
Jan 20 13:23:04 np0005589310 kernel: RPC: Registered udp transport module.
Jan 20 13:23:04 np0005589310 kernel: RPC: Registered tcp transport module.
Jan 20 13:23:04 np0005589310 kernel: RPC: Registered tcp-with-tls transport module.
Jan 20 13:23:04 np0005589310 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 20 13:23:04 np0005589310 rpc.statd[443]: Version 2.5.4 starting
Jan 20 13:23:05 np0005589310 rpc.statd[443]: Initializing NSM state
Jan 20 13:23:05 np0005589310 rpc.idmapd[448]: Setting log level to 0
Jan 20 13:23:05 np0005589310 systemd[1]: Finished dracut pre-udev hook.
Jan 20 13:23:05 np0005589310 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 20 13:23:05 np0005589310 systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Jan 20 13:23:05 np0005589310 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 20 13:23:05 np0005589310 systemd[1]: Starting dracut pre-trigger hook...
Jan 20 13:23:05 np0005589310 systemd[1]: Finished dracut pre-trigger hook.
Jan 20 13:23:05 np0005589310 systemd[1]: Starting Coldplug All udev Devices...
Jan 20 13:23:05 np0005589310 systemd[1]: Created slice Slice /system/modprobe.
Jan 20 13:23:05 np0005589310 systemd[1]: Starting Load Kernel Module configfs...
Jan 20 13:23:05 np0005589310 systemd[1]: Finished Coldplug All udev Devices.
Jan 20 13:23:05 np0005589310 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 20 13:23:05 np0005589310 systemd[1]: Reached target Network.
Jan 20 13:23:05 np0005589310 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 20 13:23:05 np0005589310 systemd[1]: Starting dracut initqueue hook...
Jan 20 13:23:05 np0005589310 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 13:23:05 np0005589310 systemd[1]: Finished Load Kernel Module configfs.
Jan 20 13:23:05 np0005589310 systemd[1]: Mounting Kernel Configuration File System...
Jan 20 13:23:05 np0005589310 systemd[1]: Mounted Kernel Configuration File System.
Jan 20 13:23:05 np0005589310 systemd[1]: Reached target System Initialization.
Jan 20 13:23:05 np0005589310 systemd[1]: Reached target Basic System.
Jan 20 13:23:05 np0005589310 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 20 13:23:05 np0005589310 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 20 13:23:05 np0005589310 kernel: scsi host0: ata_piix
Jan 20 13:23:05 np0005589310 kernel: scsi host1: ata_piix
Jan 20 13:23:05 np0005589310 kernel: vda: vda1
Jan 20 13:23:05 np0005589310 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 20 13:23:05 np0005589310 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 20 13:23:05 np0005589310 kernel: ata1: found unknown device (class 0)
Jan 20 13:23:05 np0005589310 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 13:23:05 np0005589310 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 20 13:23:05 np0005589310 systemd-udevd[482]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:23:05 np0005589310 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 20 13:23:05 np0005589310 systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 20 13:23:05 np0005589310 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 13:23:05 np0005589310 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 13:23:05 np0005589310 systemd[1]: Reached target Initrd Root Device.
Jan 20 13:23:05 np0005589310 systemd[1]: Finished dracut initqueue hook.
Jan 20 13:23:05 np0005589310 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 20 13:23:05 np0005589310 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 20 13:23:05 np0005589310 systemd[1]: Reached target Remote File Systems.
Jan 20 13:23:05 np0005589310 systemd[1]: Starting dracut pre-mount hook...
Jan 20 13:23:05 np0005589310 systemd[1]: Finished dracut pre-mount hook.
Jan 20 13:23:05 np0005589310 systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 20 13:23:05 np0005589310 systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Jan 20 13:23:05 np0005589310 systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 20 13:23:05 np0005589310 systemd[1]: Mounting /sysroot...
Jan 20 13:23:06 np0005589310 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 20 13:23:06 np0005589310 kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 20 13:23:06 np0005589310 kernel: XFS (vda1): Ending clean mount
Jan 20 13:23:06 np0005589310 systemd[1]: Mounted /sysroot.
Jan 20 13:23:06 np0005589310 systemd[1]: Reached target Initrd Root File System.
Jan 20 13:23:06 np0005589310 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 20 13:23:06 np0005589310 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 20 13:23:06 np0005589310 systemd[1]: Reached target Initrd File Systems.
Jan 20 13:23:06 np0005589310 systemd[1]: Reached target Initrd Default Target.
Jan 20 13:23:06 np0005589310 systemd[1]: Starting dracut mount hook...
Jan 20 13:23:06 np0005589310 systemd[1]: Finished dracut mount hook.
Jan 20 13:23:06 np0005589310 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 20 13:23:06 np0005589310 rpc.idmapd[448]: exiting on signal 15
Jan 20 13:23:06 np0005589310 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 20 13:23:06 np0005589310 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target Network.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target Timer Units.
Jan 20 13:23:06 np0005589310 systemd[1]: dbus.socket: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 20 13:23:06 np0005589310 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target Initrd Default Target.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target Basic System.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target Initrd Root Device.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target Initrd /usr File System.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target Path Units.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target Remote File Systems.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target Slice Units.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target Socket Units.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target System Initialization.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target Local File Systems.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target Swaps.
Jan 20 13:23:06 np0005589310 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped dracut mount hook.
Jan 20 13:23:06 np0005589310 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped dracut pre-mount hook.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 20 13:23:06 np0005589310 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 20 13:23:06 np0005589310 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped dracut initqueue hook.
Jan 20 13:23:06 np0005589310 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped Apply Kernel Variables.
Jan 20 13:23:06 np0005589310 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 20 13:23:06 np0005589310 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped Coldplug All udev Devices.
Jan 20 13:23:06 np0005589310 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped dracut pre-trigger hook.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 20 13:23:06 np0005589310 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped Setup Virtual Console.
Jan 20 13:23:06 np0005589310 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 20 13:23:06 np0005589310 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 20 13:23:06 np0005589310 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Closed udev Control Socket.
Jan 20 13:23:06 np0005589310 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Closed udev Kernel Socket.
Jan 20 13:23:06 np0005589310 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped dracut pre-udev hook.
Jan 20 13:23:06 np0005589310 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped dracut cmdline hook.
Jan 20 13:23:06 np0005589310 systemd[1]: Starting Cleanup udev Database...
Jan 20 13:23:06 np0005589310 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 20 13:23:06 np0005589310 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 20 13:23:06 np0005589310 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Stopped Create System Users.
Jan 20 13:23:06 np0005589310 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 13:23:06 np0005589310 systemd[1]: Finished Cleanup udev Database.
Jan 20 13:23:06 np0005589310 systemd[1]: Reached target Switch Root.
Jan 20 13:23:06 np0005589310 systemd[1]: Starting Switch Root...
Jan 20 13:23:06 np0005589310 systemd[1]: Switching root.
Jan 20 13:23:06 np0005589310 systemd-journald[309]: Journal stopped
Jan 20 13:23:07 np0005589310 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 20 13:23:07 np0005589310 kernel: audit: type=1404 audit(1768933386.864:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 20 13:23:07 np0005589310 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:23:07 np0005589310 kernel: SELinux:  policy capability open_perms=1
Jan 20 13:23:07 np0005589310 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:23:07 np0005589310 kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:23:07 np0005589310 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:23:07 np0005589310 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:23:07 np0005589310 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:23:07 np0005589310 kernel: audit: type=1403 audit(1768933387.001:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 13:23:07 np0005589310 systemd: Successfully loaded SELinux policy in 140.105ms.
Jan 20 13:23:07 np0005589310 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.306ms.
Jan 20 13:23:07 np0005589310 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 20 13:23:07 np0005589310 systemd: Detected virtualization kvm.
Jan 20 13:23:07 np0005589310 systemd: Detected architecture x86-64.
Jan 20 13:23:07 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:23:07 np0005589310 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 20 13:23:07 np0005589310 systemd: Stopped Switch Root.
Jan 20 13:23:07 np0005589310 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 13:23:07 np0005589310 systemd: Created slice Slice /system/getty.
Jan 20 13:23:07 np0005589310 systemd: Created slice Slice /system/serial-getty.
Jan 20 13:23:07 np0005589310 systemd: Created slice Slice /system/sshd-keygen.
Jan 20 13:23:07 np0005589310 systemd: Created slice User and Session Slice.
Jan 20 13:23:07 np0005589310 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 20 13:23:07 np0005589310 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 20 13:23:07 np0005589310 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 20 13:23:07 np0005589310 systemd: Reached target Local Encrypted Volumes.
Jan 20 13:23:07 np0005589310 systemd: Stopped target Switch Root.
Jan 20 13:23:07 np0005589310 systemd: Stopped target Initrd File Systems.
Jan 20 13:23:07 np0005589310 systemd: Stopped target Initrd Root File System.
Jan 20 13:23:07 np0005589310 systemd: Reached target Local Integrity Protected Volumes.
Jan 20 13:23:07 np0005589310 systemd: Reached target Path Units.
Jan 20 13:23:07 np0005589310 systemd: Reached target rpc_pipefs.target.
Jan 20 13:23:07 np0005589310 systemd: Reached target Slice Units.
Jan 20 13:23:07 np0005589310 systemd: Reached target Swaps.
Jan 20 13:23:07 np0005589310 systemd: Reached target Local Verity Protected Volumes.
Jan 20 13:23:07 np0005589310 systemd: Listening on RPCbind Server Activation Socket.
Jan 20 13:23:07 np0005589310 systemd: Reached target RPC Port Mapper.
Jan 20 13:23:07 np0005589310 systemd: Listening on Process Core Dump Socket.
Jan 20 13:23:07 np0005589310 systemd: Listening on initctl Compatibility Named Pipe.
Jan 20 13:23:07 np0005589310 systemd: Listening on udev Control Socket.
Jan 20 13:23:07 np0005589310 systemd: Listening on udev Kernel Socket.
Jan 20 13:23:07 np0005589310 systemd: Mounting Huge Pages File System...
Jan 20 13:23:07 np0005589310 systemd: Mounting POSIX Message Queue File System...
Jan 20 13:23:07 np0005589310 systemd: Mounting Kernel Debug File System...
Jan 20 13:23:07 np0005589310 systemd: Mounting Kernel Trace File System...
Jan 20 13:23:07 np0005589310 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 20 13:23:07 np0005589310 systemd: Starting Create List of Static Device Nodes...
Jan 20 13:23:07 np0005589310 systemd: Starting Load Kernel Module configfs...
Jan 20 13:23:07 np0005589310 systemd: Starting Load Kernel Module drm...
Jan 20 13:23:07 np0005589310 systemd: Starting Load Kernel Module efi_pstore...
Jan 20 13:23:07 np0005589310 systemd: Starting Load Kernel Module fuse...
Jan 20 13:23:07 np0005589310 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 20 13:23:07 np0005589310 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 20 13:23:07 np0005589310 systemd: Stopped File System Check on Root Device.
Jan 20 13:23:07 np0005589310 systemd: Stopped Journal Service.
Jan 20 13:23:07 np0005589310 systemd: Starting Journal Service...
Jan 20 13:23:07 np0005589310 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 20 13:23:07 np0005589310 systemd: Starting Generate network units from Kernel command line...
Jan 20 13:23:07 np0005589310 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 20 13:23:07 np0005589310 systemd: Starting Remount Root and Kernel File Systems...
Jan 20 13:23:07 np0005589310 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 13:23:07 np0005589310 systemd: Starting Apply Kernel Variables...
Jan 20 13:23:07 np0005589310 systemd: Starting Coldplug All udev Devices...
Jan 20 13:23:07 np0005589310 kernel: fuse: init (API version 7.37)
Jan 20 13:23:07 np0005589310 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 20 13:23:07 np0005589310 systemd: Mounted Huge Pages File System.
Jan 20 13:23:07 np0005589310 systemd: Mounted POSIX Message Queue File System.
Jan 20 13:23:07 np0005589310 systemd: Mounted Kernel Debug File System.
Jan 20 13:23:07 np0005589310 systemd: Mounted Kernel Trace File System.
Jan 20 13:23:07 np0005589310 systemd-journald[679]: Journal started
Jan 20 13:23:07 np0005589310 systemd-journald[679]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 20 13:23:07 np0005589310 systemd[1]: Queued start job for default target Multi-User System.
Jan 20 13:23:07 np0005589310 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 20 13:23:07 np0005589310 systemd: Started Journal Service.
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Create List of Static Device Nodes.
Jan 20 13:23:07 np0005589310 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Load Kernel Module configfs.
Jan 20 13:23:07 np0005589310 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 20 13:23:07 np0005589310 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Load Kernel Module fuse.
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Generate network units from Kernel command line.
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Apply Kernel Variables.
Jan 20 13:23:07 np0005589310 kernel: ACPI: bus type drm_connector registered
Jan 20 13:23:07 np0005589310 systemd[1]: Mounting FUSE Control File System...
Jan 20 13:23:07 np0005589310 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 20 13:23:07 np0005589310 systemd[1]: Starting Rebuild Hardware Database...
Jan 20 13:23:07 np0005589310 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 20 13:23:07 np0005589310 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 13:23:07 np0005589310 systemd[1]: Starting Load/Save OS Random Seed...
Jan 20 13:23:07 np0005589310 systemd[1]: Starting Create System Users...
Jan 20 13:23:07 np0005589310 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Load Kernel Module drm.
Jan 20 13:23:07 np0005589310 systemd-journald[679]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 20 13:23:07 np0005589310 systemd-journald[679]: Received client request to flush runtime journal.
Jan 20 13:23:07 np0005589310 systemd[1]: Mounted FUSE Control File System.
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Load/Save OS Random Seed.
Jan 20 13:23:07 np0005589310 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Create System Users.
Jan 20 13:23:07 np0005589310 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Coldplug All udev Devices.
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 20 13:23:07 np0005589310 systemd[1]: Reached target Preparation for Local File Systems.
Jan 20 13:23:07 np0005589310 systemd[1]: Reached target Local File Systems.
Jan 20 13:23:07 np0005589310 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 20 13:23:07 np0005589310 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 20 13:23:07 np0005589310 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 13:23:07 np0005589310 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 20 13:23:07 np0005589310 systemd[1]: Starting Automatic Boot Loader Update...
Jan 20 13:23:07 np0005589310 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 20 13:23:07 np0005589310 systemd[1]: Starting Create Volatile Files and Directories...
Jan 20 13:23:07 np0005589310 bootctl[696]: Couldn't find EFI system partition, skipping.
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Automatic Boot Loader Update.
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Create Volatile Files and Directories.
Jan 20 13:23:07 np0005589310 systemd[1]: Starting Security Auditing Service...
Jan 20 13:23:07 np0005589310 systemd[1]: Starting RPC Bind...
Jan 20 13:23:07 np0005589310 systemd[1]: Starting Rebuild Journal Catalog...
Jan 20 13:23:07 np0005589310 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 20 13:23:07 np0005589310 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Rebuild Journal Catalog.
Jan 20 13:23:07 np0005589310 systemd[1]: Started RPC Bind.
Jan 20 13:23:07 np0005589310 augenrules[707]: /sbin/augenrules: No change
Jan 20 13:23:07 np0005589310 augenrules[722]: No rules
Jan 20 13:23:07 np0005589310 augenrules[722]: enabled 1
Jan 20 13:23:07 np0005589310 augenrules[722]: failure 1
Jan 20 13:23:07 np0005589310 augenrules[722]: pid 702
Jan 20 13:23:07 np0005589310 augenrules[722]: rate_limit 0
Jan 20 13:23:07 np0005589310 augenrules[722]: backlog_limit 8192
Jan 20 13:23:07 np0005589310 augenrules[722]: lost 0
Jan 20 13:23:07 np0005589310 augenrules[722]: backlog 2
Jan 20 13:23:07 np0005589310 augenrules[722]: backlog_wait_time 60000
Jan 20 13:23:07 np0005589310 augenrules[722]: backlog_wait_time_actual 0
Jan 20 13:23:07 np0005589310 augenrules[722]: enabled 1
Jan 20 13:23:07 np0005589310 augenrules[722]: failure 1
Jan 20 13:23:07 np0005589310 augenrules[722]: pid 702
Jan 20 13:23:07 np0005589310 augenrules[722]: rate_limit 0
Jan 20 13:23:07 np0005589310 augenrules[722]: backlog_limit 8192
Jan 20 13:23:07 np0005589310 augenrules[722]: lost 0
Jan 20 13:23:07 np0005589310 augenrules[722]: backlog 0
Jan 20 13:23:07 np0005589310 augenrules[722]: backlog_wait_time 60000
Jan 20 13:23:07 np0005589310 augenrules[722]: backlog_wait_time_actual 0
Jan 20 13:23:07 np0005589310 augenrules[722]: enabled 1
Jan 20 13:23:07 np0005589310 augenrules[722]: failure 1
Jan 20 13:23:07 np0005589310 augenrules[722]: pid 702
Jan 20 13:23:07 np0005589310 augenrules[722]: rate_limit 0
Jan 20 13:23:07 np0005589310 augenrules[722]: backlog_limit 8192
Jan 20 13:23:07 np0005589310 augenrules[722]: lost 0
Jan 20 13:23:07 np0005589310 augenrules[722]: backlog 0
Jan 20 13:23:07 np0005589310 augenrules[722]: backlog_wait_time 60000
Jan 20 13:23:07 np0005589310 augenrules[722]: backlog_wait_time_actual 0
Jan 20 13:23:07 np0005589310 systemd[1]: Started Security Auditing Service.
Jan 20 13:23:07 np0005589310 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 20 13:23:07 np0005589310 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 20 13:23:08 np0005589310 systemd[1]: Finished Rebuild Hardware Database.
Jan 20 13:23:08 np0005589310 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 20 13:23:08 np0005589310 systemd[1]: Starting Update is Completed...
Jan 20 13:23:08 np0005589310 systemd[1]: Finished Update is Completed.
Jan 20 13:23:08 np0005589310 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Jan 20 13:23:08 np0005589310 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 20 13:23:08 np0005589310 systemd[1]: Reached target System Initialization.
Jan 20 13:23:08 np0005589310 systemd[1]: Started dnf makecache --timer.
Jan 20 13:23:08 np0005589310 systemd[1]: Started Daily rotation of log files.
Jan 20 13:23:08 np0005589310 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 20 13:23:08 np0005589310 systemd[1]: Reached target Timer Units.
Jan 20 13:23:08 np0005589310 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 20 13:23:08 np0005589310 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 20 13:23:08 np0005589310 systemd[1]: Reached target Socket Units.
Jan 20 13:23:08 np0005589310 systemd[1]: Starting D-Bus System Message Bus...
Jan 20 13:23:08 np0005589310 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 20 13:23:08 np0005589310 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 20 13:23:08 np0005589310 systemd[1]: Starting Load Kernel Module configfs...
Jan 20 13:23:08 np0005589310 systemd-udevd[732]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:23:08 np0005589310 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 13:23:08 np0005589310 systemd[1]: Finished Load Kernel Module configfs.
Jan 20 13:23:08 np0005589310 systemd[1]: Started D-Bus System Message Bus.
Jan 20 13:23:08 np0005589310 systemd[1]: Reached target Basic System.
Jan 20 13:23:08 np0005589310 dbus-broker-lau[760]: Ready
Jan 20 13:23:08 np0005589310 systemd[1]: Starting NTP client/server...
Jan 20 13:23:08 np0005589310 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 20 13:23:08 np0005589310 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 20 13:23:08 np0005589310 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 13:23:08 np0005589310 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 20 13:23:08 np0005589310 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 20 13:23:08 np0005589310 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 20 13:23:08 np0005589310 chronyd[784]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 20 13:23:08 np0005589310 chronyd[784]: Loaded 0 symmetric keys
Jan 20 13:23:08 np0005589310 chronyd[784]: Using right/UTC timezone to obtain leap second data
Jan 20 13:23:08 np0005589310 chronyd[784]: Loaded seccomp filter (level 2)
Jan 20 13:23:08 np0005589310 systemd[1]: Starting IPv4 firewall with iptables...
Jan 20 13:23:08 np0005589310 systemd[1]: Started irqbalance daemon.
Jan 20 13:23:08 np0005589310 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 20 13:23:08 np0005589310 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 13:23:08 np0005589310 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 13:23:08 np0005589310 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 13:23:08 np0005589310 systemd[1]: Reached target sshd-keygen.target.
Jan 20 13:23:08 np0005589310 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 20 13:23:08 np0005589310 systemd[1]: Reached target User and Group Name Lookups.
Jan 20 13:23:08 np0005589310 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 20 13:23:08 np0005589310 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 20 13:23:08 np0005589310 kernel: Console: switching to colour dummy device 80x25
Jan 20 13:23:08 np0005589310 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 20 13:23:08 np0005589310 kernel: [drm] features: -context_init
Jan 20 13:23:08 np0005589310 kernel: [drm] number of scanouts: 1
Jan 20 13:23:08 np0005589310 kernel: [drm] number of cap sets: 0
Jan 20 13:23:08 np0005589310 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 20 13:23:08 np0005589310 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 20 13:23:08 np0005589310 kernel: Console: switching to colour frame buffer device 128x48
Jan 20 13:23:08 np0005589310 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 20 13:23:08 np0005589310 systemd[1]: Starting User Login Management...
Jan 20 13:23:08 np0005589310 kernel: kvm_amd: TSC scaling supported
Jan 20 13:23:08 np0005589310 kernel: kvm_amd: Nested Virtualization enabled
Jan 20 13:23:08 np0005589310 kernel: kvm_amd: Nested Paging enabled
Jan 20 13:23:08 np0005589310 kernel: kvm_amd: LBR virtualization supported
Jan 20 13:23:08 np0005589310 systemd[1]: Started NTP client/server.
Jan 20 13:23:08 np0005589310 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 20 13:23:08 np0005589310 systemd-logind[797]: New seat seat0.
Jan 20 13:23:08 np0005589310 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 20 13:23:08 np0005589310 systemd-logind[797]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 20 13:23:08 np0005589310 systemd-logind[797]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 20 13:23:08 np0005589310 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 20 13:23:08 np0005589310 systemd[1]: Started User Login Management.
Jan 20 13:23:08 np0005589310 iptables.init[787]: iptables: Applying firewall rules: [  OK  ]
Jan 20 13:23:08 np0005589310 systemd[1]: Finished IPv4 firewall with iptables.
Jan 20 13:23:08 np0005589310 cloud-init[840]: Cloud-init v. 24.4-8.el9 running 'init-local' at Tue, 20 Jan 2026 18:23:08 +0000. Up 6.47 seconds.
Jan 20 13:23:09 np0005589310 systemd[1]: run-cloud\x2dinit-tmp-tmpgem3t85w.mount: Deactivated successfully.
Jan 20 13:23:09 np0005589310 systemd[1]: Starting Hostname Service...
Jan 20 13:23:09 np0005589310 systemd[1]: Started Hostname Service.
Jan 20 13:23:09 np0005589310 systemd-hostnamed[854]: Hostname set to <np0005589310.novalocal> (static)
Jan 20 13:23:09 np0005589310 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 20 13:23:09 np0005589310 systemd[1]: Reached target Preparation for Network.
Jan 20 13:23:09 np0005589310 systemd[1]: Starting Network Manager...
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.6794] NetworkManager (version 1.54.3-2.el9) is starting... (boot:67fc3c9d-8ab5-4c8d-ad06-0b5b4ad77266)
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.6800] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.6887] manager[0x560cf1df7000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.6927] hostname: hostname: using hostnamed
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.6927] hostname: static hostname changed from (none) to "np0005589310.novalocal"
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.6933] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7108] manager[0x560cf1df7000]: rfkill: Wi-Fi hardware radio set enabled
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7109] manager[0x560cf1df7000]: rfkill: WWAN hardware radio set enabled
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7155] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7155] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7156] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7157] manager: Networking is enabled by state file
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7159] settings: Loaded settings plugin: keyfile (internal)
Jan 20 13:23:09 np0005589310 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7169] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7193] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7205] dhcp: init: Using DHCP client 'internal'
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7207] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7220] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7228] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7237] device (lo): Activation: starting connection 'lo' (9dbcb845-48af-44e7-aac2-9b1c27d04ec3)
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7247] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7251] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7283] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7286] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7288] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7293] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7295] device (eth0): carrier: link connected
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7298] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7303] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7315] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7320] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7320] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7323] manager: NetworkManager state is now CONNECTING
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7324] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7331] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7335] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:23:09 np0005589310 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:23:09 np0005589310 systemd[1]: Started Network Manager.
Jan 20 13:23:09 np0005589310 systemd[1]: Reached target Network.
Jan 20 13:23:09 np0005589310 systemd[1]: Starting Network Manager Wait Online...
Jan 20 13:23:09 np0005589310 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 20 13:23:09 np0005589310 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7594] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7597] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 20 13:23:09 np0005589310 NetworkManager[858]: <info>  [1768933389.7603] device (lo): Activation: successful, device activated.
Jan 20 13:23:09 np0005589310 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 20 13:23:09 np0005589310 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 20 13:23:09 np0005589310 systemd[1]: Reached target NFS client services.
Jan 20 13:23:09 np0005589310 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 20 13:23:09 np0005589310 systemd[1]: Reached target Remote File Systems.
Jan 20 13:23:09 np0005589310 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 20 13:23:10 np0005589310 NetworkManager[858]: <info>  [1768933390.1452] dhcp4 (eth0): state changed new lease, address=38.102.83.210
Jan 20 13:23:10 np0005589310 NetworkManager[858]: <info>  [1768933390.1464] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 20 13:23:10 np0005589310 NetworkManager[858]: <info>  [1768933390.1486] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:23:10 np0005589310 NetworkManager[858]: <info>  [1768933390.1538] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:23:10 np0005589310 NetworkManager[858]: <info>  [1768933390.1539] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:23:10 np0005589310 NetworkManager[858]: <info>  [1768933390.1541] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 13:23:10 np0005589310 NetworkManager[858]: <info>  [1768933390.1543] device (eth0): Activation: successful, device activated.
Jan 20 13:23:10 np0005589310 NetworkManager[858]: <info>  [1768933390.1547] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 20 13:23:10 np0005589310 NetworkManager[858]: <info>  [1768933390.1549] manager: startup complete
Jan 20 13:23:10 np0005589310 systemd[1]: Finished Network Manager Wait Online.
Jan 20 13:23:10 np0005589310 systemd[1]: Starting Cloud-init: Network Stage...
Jan 20 13:23:10 np0005589310 cloud-init[921]: Cloud-init v. 24.4-8.el9 running 'init' at Tue, 20 Jan 2026 18:23:10 +0000. Up 8.10 seconds.
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.210         | 255.255.255.0 | global | fa:16:3e:cb:85:96 |
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fecb:8596/64 |       .       |  link  | fa:16:3e:cb:85:96 |
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 20 13:23:10 np0005589310 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 20 13:23:12 np0005589310 cloud-init[921]: Generating public/private rsa key pair.
Jan 20 13:23:12 np0005589310 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 20 13:23:12 np0005589310 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 20 13:23:12 np0005589310 cloud-init[921]: The key fingerprint is:
Jan 20 13:23:12 np0005589310 cloud-init[921]: SHA256:xq/BKHwS7OYSbvQhm5GNkVJ1N+8ccXAte0mKmDWDeIE root@np0005589310.novalocal
Jan 20 13:23:12 np0005589310 cloud-init[921]: The key's randomart image is:
Jan 20 13:23:12 np0005589310 cloud-init[921]: +---[RSA 3072]----+
Jan 20 13:23:12 np0005589310 cloud-init[921]: |   .. .o+oo.o.   |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |  .  .E.oo++. o  |
Jan 20 13:23:12 np0005589310 cloud-init[921]: | . .   . +o+ = . |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |. o.   .oo..o o  |
Jan 20 13:23:12 np0005589310 cloud-init[921]: | . =o   S o  .   |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |  Boo. + .       |
Jan 20 13:23:12 np0005589310 cloud-init[921]: | o B*.o o .      |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |  *o.+   o       |
Jan 20 13:23:12 np0005589310 cloud-init[921]: | . ..   .        |
Jan 20 13:23:12 np0005589310 cloud-init[921]: +----[SHA256]-----+
Jan 20 13:23:12 np0005589310 cloud-init[921]: Generating public/private ecdsa key pair.
Jan 20 13:23:12 np0005589310 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 20 13:23:12 np0005589310 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 20 13:23:12 np0005589310 cloud-init[921]: The key fingerprint is:
Jan 20 13:23:12 np0005589310 cloud-init[921]: SHA256:Pi/w76stpzFlcqqESeZJim8TbgmEFsOJOR+R+Xn/+EQ root@np0005589310.novalocal
Jan 20 13:23:12 np0005589310 cloud-init[921]: The key's randomart image is:
Jan 20 13:23:12 np0005589310 cloud-init[921]: +---[ECDSA 256]---+
Jan 20 13:23:12 np0005589310 cloud-init[921]: |o.o+             |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |+=+              |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |.ooo .           |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |.o. o .          |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |o   +. .S E      |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |...* +...B       |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |.o.o= .o*o.      |
Jan 20 13:23:12 np0005589310 cloud-init[921]: | .*  . .=Bo      |
Jan 20 13:23:12 np0005589310 cloud-init[921]: | o..  . oXB.     |
Jan 20 13:23:12 np0005589310 cloud-init[921]: +----[SHA256]-----+
Jan 20 13:23:12 np0005589310 cloud-init[921]: Generating public/private ed25519 key pair.
Jan 20 13:23:12 np0005589310 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 20 13:23:12 np0005589310 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 20 13:23:12 np0005589310 cloud-init[921]: The key fingerprint is:
Jan 20 13:23:12 np0005589310 cloud-init[921]: SHA256:T2RZBGG+0a4+EG4dEp0XIljSlRrvCSsBJEPBHiJL+Gw root@np0005589310.novalocal
Jan 20 13:23:12 np0005589310 cloud-init[921]: The key's randomart image is:
Jan 20 13:23:12 np0005589310 cloud-init[921]: +--[ED25519 256]--+
Jan 20 13:23:12 np0005589310 cloud-init[921]: |.o=o. .+oo**+.   |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |+.oo. ..+++=.    |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |o* . .   =*..    |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |. E   . =oo+     |
Jan 20 13:23:12 np0005589310 cloud-init[921]: | .     oSBoo.    |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |      . =o+.     |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |       o .o      |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |         ..      |
Jan 20 13:23:12 np0005589310 cloud-init[921]: |          ..     |
Jan 20 13:23:12 np0005589310 cloud-init[921]: +----[SHA256]-----+
Jan 20 13:23:12 np0005589310 systemd[1]: Finished Cloud-init: Network Stage.
Jan 20 13:23:12 np0005589310 systemd[1]: Reached target Cloud-config availability.
Jan 20 13:23:12 np0005589310 systemd[1]: Reached target Network is Online.
Jan 20 13:23:12 np0005589310 systemd[1]: Starting Cloud-init: Config Stage...
Jan 20 13:23:12 np0005589310 systemd[1]: Starting Crash recovery kernel arming...
Jan 20 13:23:12 np0005589310 systemd[1]: Starting Notify NFS peers of a restart...
Jan 20 13:23:12 np0005589310 systemd[1]: Starting System Logging Service...
Jan 20 13:23:12 np0005589310 systemd[1]: Starting OpenSSH server daemon...
Jan 20 13:23:12 np0005589310 sm-notify[1006]: Version 2.5.4 starting
Jan 20 13:23:12 np0005589310 systemd[1]: Starting Permit User Sessions...
Jan 20 13:23:12 np0005589310 systemd[1]: Started OpenSSH server daemon.
Jan 20 13:23:12 np0005589310 systemd[1]: Started Notify NFS peers of a restart.
Jan 20 13:23:12 np0005589310 systemd[1]: Finished Permit User Sessions.
Jan 20 13:23:12 np0005589310 systemd[1]: Started Command Scheduler.
Jan 20 13:23:12 np0005589310 systemd[1]: Started Getty on tty1.
Jan 20 13:23:12 np0005589310 systemd[1]: Started Serial Getty on ttyS0.
Jan 20 13:23:12 np0005589310 systemd[1]: Reached target Login Prompts.
Jan 20 13:23:12 np0005589310 rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Jan 20 13:23:12 np0005589310 rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 20 13:23:12 np0005589310 systemd[1]: Started System Logging Service.
Jan 20 13:23:12 np0005589310 systemd[1]: Reached target Multi-User System.
Jan 20 13:23:12 np0005589310 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 20 13:23:12 np0005589310 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 20 13:23:12 np0005589310 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 20 13:23:12 np0005589310 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 13:23:12 np0005589310 kdumpctl[1019]: kdump: No kdump initial ramdisk found.
Jan 20 13:23:12 np0005589310 kdumpctl[1019]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 20 13:23:12 np0005589310 cloud-init[1148]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Tue, 20 Jan 2026 18:23:12 +0000. Up 10.15 seconds.
Jan 20 13:23:12 np0005589310 systemd[1]: Finished Cloud-init: Config Stage.
Jan 20 13:23:12 np0005589310 systemd[1]: Starting Cloud-init: Final Stage...
Jan 20 13:23:12 np0005589310 dracut[1267]: dracut-057-102.git20250818.el9
Jan 20 13:23:12 np0005589310 cloud-init[1285]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Tue, 20 Jan 2026 18:23:12 +0000. Up 10.57 seconds.
Jan 20 13:23:13 np0005589310 dracut[1269]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 20 13:23:13 np0005589310 cloud-init[1302]: #############################################################
Jan 20 13:23:13 np0005589310 cloud-init[1305]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 20 13:23:13 np0005589310 cloud-init[1310]: 256 SHA256:Pi/w76stpzFlcqqESeZJim8TbgmEFsOJOR+R+Xn/+EQ root@np0005589310.novalocal (ECDSA)
Jan 20 13:23:13 np0005589310 cloud-init[1314]: 256 SHA256:T2RZBGG+0a4+EG4dEp0XIljSlRrvCSsBJEPBHiJL+Gw root@np0005589310.novalocal (ED25519)
Jan 20 13:23:13 np0005589310 cloud-init[1321]: 3072 SHA256:xq/BKHwS7OYSbvQhm5GNkVJ1N+8ccXAte0mKmDWDeIE root@np0005589310.novalocal (RSA)
Jan 20 13:23:13 np0005589310 cloud-init[1323]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 20 13:23:13 np0005589310 cloud-init[1324]: #############################################################
Jan 20 13:23:13 np0005589310 cloud-init[1285]: Cloud-init v. 24.4-8.el9 finished at Tue, 20 Jan 2026 18:23:13 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.77 seconds
Jan 20 13:23:13 np0005589310 systemd[1]: Finished Cloud-init: Final Stage.
Jan 20 13:23:13 np0005589310 systemd[1]: Reached target Cloud-init target.
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 20 13:23:13 np0005589310 dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: memstrack is not available
Jan 20 13:23:14 np0005589310 dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 20 13:23:14 np0005589310 dracut[1269]: memstrack is not available
Jan 20 13:23:14 np0005589310 dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 20 13:23:14 np0005589310 dracut[1269]: *** Including module: systemd ***
Jan 20 13:23:15 np0005589310 dracut[1269]: *** Including module: fips ***
Jan 20 13:23:15 np0005589310 chronyd[784]: Selected source 23.159.16.194 (2.centos.pool.ntp.org)
Jan 20 13:23:15 np0005589310 chronyd[784]: System clock TAI offset set to 37 seconds
Jan 20 13:23:15 np0005589310 dracut[1269]: *** Including module: systemd-initrd ***
Jan 20 13:23:15 np0005589310 dracut[1269]: *** Including module: i18n ***
Jan 20 13:23:15 np0005589310 dracut[1269]: *** Including module: drm ***
Jan 20 13:23:16 np0005589310 dracut[1269]: *** Including module: prefixdevname ***
Jan 20 13:23:16 np0005589310 dracut[1269]: *** Including module: kernel-modules ***
Jan 20 13:23:16 np0005589310 kernel: block vda: the capability attribute has been deprecated.
Jan 20 13:23:16 np0005589310 dracut[1269]: *** Including module: kernel-modules-extra ***
Jan 20 13:23:16 np0005589310 dracut[1269]: *** Including module: qemu ***
Jan 20 13:23:16 np0005589310 dracut[1269]: *** Including module: fstab-sys ***
Jan 20 13:23:16 np0005589310 dracut[1269]: *** Including module: rootfs-block ***
Jan 20 13:23:16 np0005589310 dracut[1269]: *** Including module: terminfo ***
Jan 20 13:23:16 np0005589310 dracut[1269]: *** Including module: udev-rules ***
Jan 20 13:23:17 np0005589310 dracut[1269]: Skipping udev rule: 91-permissions.rules
Jan 20 13:23:17 np0005589310 dracut[1269]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 20 13:23:17 np0005589310 dracut[1269]: *** Including module: virtiofs ***
Jan 20 13:23:17 np0005589310 dracut[1269]: *** Including module: dracut-systemd ***
Jan 20 13:23:17 np0005589310 dracut[1269]: *** Including module: usrmount ***
Jan 20 13:23:17 np0005589310 dracut[1269]: *** Including module: base ***
Jan 20 13:23:18 np0005589310 dracut[1269]: *** Including module: fs-lib ***
Jan 20 13:23:18 np0005589310 dracut[1269]: *** Including module: kdumpbase ***
Jan 20 13:23:18 np0005589310 irqbalance[789]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 20 13:23:18 np0005589310 irqbalance[789]: IRQ 25 affinity is now unmanaged
Jan 20 13:23:18 np0005589310 irqbalance[789]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 20 13:23:18 np0005589310 irqbalance[789]: IRQ 31 affinity is now unmanaged
Jan 20 13:23:18 np0005589310 irqbalance[789]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 20 13:23:18 np0005589310 irqbalance[789]: IRQ 28 affinity is now unmanaged
Jan 20 13:23:18 np0005589310 irqbalance[789]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 20 13:23:18 np0005589310 irqbalance[789]: IRQ 32 affinity is now unmanaged
Jan 20 13:23:18 np0005589310 irqbalance[789]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 20 13:23:18 np0005589310 irqbalance[789]: IRQ 30 affinity is now unmanaged
Jan 20 13:23:18 np0005589310 irqbalance[789]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 20 13:23:18 np0005589310 irqbalance[789]: IRQ 29 affinity is now unmanaged
Jan 20 13:23:18 np0005589310 dracut[1269]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 20 13:23:18 np0005589310 dracut[1269]:  microcode_ctl module: mangling fw_dir
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: configuration "intel" is ignored
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 20 13:23:18 np0005589310 dracut[1269]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 20 13:23:18 np0005589310 dracut[1269]: *** Including module: openssl ***
Jan 20 13:23:19 np0005589310 dracut[1269]: *** Including module: shutdown ***
Jan 20 13:23:19 np0005589310 dracut[1269]: *** Including module: squash ***
Jan 20 13:23:19 np0005589310 dracut[1269]: *** Including modules done ***
Jan 20 13:23:19 np0005589310 dracut[1269]: *** Installing kernel module dependencies ***
Jan 20 13:23:19 np0005589310 dracut[1269]: *** Installing kernel module dependencies done ***
Jan 20 13:23:19 np0005589310 dracut[1269]: *** Resolving executable dependencies ***
Jan 20 13:23:20 np0005589310 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:23:21 np0005589310 dracut[1269]: *** Resolving executable dependencies done ***
Jan 20 13:23:21 np0005589310 dracut[1269]: *** Generating early-microcode cpio image ***
Jan 20 13:23:21 np0005589310 dracut[1269]: *** Store current command line parameters ***
Jan 20 13:23:21 np0005589310 dracut[1269]: Stored kernel commandline:
Jan 20 13:23:21 np0005589310 dracut[1269]: No dracut internal kernel commandline stored in the initramfs
Jan 20 13:23:21 np0005589310 dracut[1269]: *** Install squash loader ***
Jan 20 13:23:22 np0005589310 dracut[1269]: *** Squashing the files inside the initramfs ***
Jan 20 13:23:23 np0005589310 dracut[1269]: *** Squashing the files inside the initramfs done ***
Jan 20 13:23:23 np0005589310 dracut[1269]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 20 13:23:23 np0005589310 dracut[1269]: *** Hardlinking files ***
Jan 20 13:23:23 np0005589310 dracut[1269]: *** Hardlinking files done ***
Jan 20 13:23:24 np0005589310 dracut[1269]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 20 13:23:24 np0005589310 kdumpctl[1019]: kdump: kexec: loaded kdump kernel
Jan 20 13:23:24 np0005589310 kdumpctl[1019]: kdump: Starting kdump: [OK]
Jan 20 13:23:24 np0005589310 systemd[1]: Finished Crash recovery kernel arming.
Jan 20 13:23:24 np0005589310 systemd[1]: Startup finished in 1.607s (kernel) + 2.881s (initrd) + 17.981s (userspace) = 22.470s.
Jan 20 13:23:39 np0005589310 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 13:26:20 np0005589310 systemd[1]: Created slice User Slice of UID 1000.
Jan 20 13:26:20 np0005589310 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 20 13:26:20 np0005589310 systemd-logind[797]: New session 1 of user zuul.
Jan 20 13:26:20 np0005589310 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 20 13:26:20 np0005589310 systemd[1]: Starting User Manager for UID 1000...
Jan 20 13:26:20 np0005589310 systemd[4314]: Queued start job for default target Main User Target.
Jan 20 13:26:20 np0005589310 systemd[4314]: Created slice User Application Slice.
Jan 20 13:26:20 np0005589310 systemd[4314]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 13:26:20 np0005589310 systemd[4314]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 13:26:20 np0005589310 systemd[4314]: Reached target Paths.
Jan 20 13:26:20 np0005589310 systemd[4314]: Reached target Timers.
Jan 20 13:26:20 np0005589310 systemd[4314]: Starting D-Bus User Message Bus Socket...
Jan 20 13:26:20 np0005589310 systemd[4314]: Starting Create User's Volatile Files and Directories...
Jan 20 13:26:20 np0005589310 systemd[4314]: Listening on D-Bus User Message Bus Socket.
Jan 20 13:26:20 np0005589310 systemd[4314]: Reached target Sockets.
Jan 20 13:26:20 np0005589310 systemd[4314]: Finished Create User's Volatile Files and Directories.
Jan 20 13:26:20 np0005589310 systemd[4314]: Reached target Basic System.
Jan 20 13:26:20 np0005589310 systemd[4314]: Reached target Main User Target.
Jan 20 13:26:20 np0005589310 systemd[4314]: Startup finished in 233ms.
Jan 20 13:26:20 np0005589310 systemd[1]: Started User Manager for UID 1000.
Jan 20 13:26:20 np0005589310 systemd[1]: Started Session 1 of User zuul.
Jan 20 13:26:21 np0005589310 python3[4397]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:26:23 np0005589310 python3[4425]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:26:29 np0005589310 python3[4483]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:26:30 np0005589310 python3[4523]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 20 13:26:33 np0005589310 python3[4549]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCh3Yi5Xd7DiYa1i0K0hEQRl3npfFeF2fqveQSvJ3+Qh32GCocOe6DbPFsG9H7BUHVEflWNJZdPPlCUM6C7xU61TwiHRqIfRKwDP1ZZZ0c9F1IEp4kgnp+KxBpgAFTpPr0g8DlLHgZvJCKpyLTjQm3nxxXkLT/AM0aER72bKzo+yElY3FC/T6Vlg4zUI5whCnrOdFi460EqOWARONWoFl4YQvpnXjL1oSiyy/AA2SLZMmu8pnl8mZAtlFs96/T6+MbAiycKiV9aiIWM74tzjY/FQ43abQCIFQ2LFjCzP+CKDzTQkhX+FFXDEpV9sFfE7T5L2IwqBGu8OmPOgXKyZRFUWYdJx+HWYiUq4j+8LRrEqLxB5fs/2Zn4CBcTKG1Qkoz2vcDiox/P0zVycwzQFSwMiqPxWGAsRhrGubvXvf4HCaBQzRjRp/0xWjKqqqYOhuK/ThW7fpEkuTvS7g1A+oJZNN7gIt2PgK45UOOSCD1xHtQLeR5HuNR7giWXHVKO7T0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:33 np0005589310 python3[4573]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:34 np0005589310 python3[4672]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:26:34 np0005589310 python3[4743]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768933593.6811175-207-184448163679159/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=480e6c18599849fe9f94b7a2a9bafd87_id_rsa follow=False checksum=e533cfffdb60e29c3d9ad08b7280ab1612aed717 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:35 np0005589310 python3[4866]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:26:35 np0005589310 python3[4937]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768933594.7762268-240-14635357441409/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=480e6c18599849fe9f94b7a2a9bafd87_id_rsa.pub follow=False checksum=3b367e5376f0bd06906e6d88484065951910e849 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:36 np0005589310 python3[4985]: ansible-ping Invoked with data=pong
Jan 20 13:26:37 np0005589310 python3[5009]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:26:39 np0005589310 python3[5067]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 20 13:26:40 np0005589310 python3[5099]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:40 np0005589310 python3[5123]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:40 np0005589310 python3[5147]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:41 np0005589310 python3[5171]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:41 np0005589310 python3[5195]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:41 np0005589310 python3[5219]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:43 np0005589310 python3[5245]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:44 np0005589310 python3[5323]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:26:44 np0005589310 python3[5396]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1768933603.8398094-21-152098359614112/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:45 np0005589310 python3[5444]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:45 np0005589310 python3[5468]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:45 np0005589310 python3[5492]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:46 np0005589310 python3[5516]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:46 np0005589310 python3[5540]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:46 np0005589310 python3[5564]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:47 np0005589310 python3[5588]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:47 np0005589310 python3[5612]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:47 np0005589310 python3[5636]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:47 np0005589310 python3[5660]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:48 np0005589310 python3[5684]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:48 np0005589310 irqbalance[789]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 20 13:26:48 np0005589310 irqbalance[789]: IRQ 26 affinity is now unmanaged
Jan 20 13:26:48 np0005589310 python3[5708]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:48 np0005589310 python3[5732]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:49 np0005589310 python3[5756]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:49 np0005589310 python3[5780]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:49 np0005589310 python3[5804]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:49 np0005589310 python3[5828]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:50 np0005589310 python3[5852]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:50 np0005589310 python3[5876]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:50 np0005589310 python3[5900]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:50 np0005589310 python3[5924]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:51 np0005589310 python3[5948]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:51 np0005589310 python3[5972]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:51 np0005589310 python3[5996]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:51 np0005589310 python3[6020]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:52 np0005589310 python3[6044]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:26:55 np0005589310 python3[6070]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 20 13:26:55 np0005589310 systemd[1]: Starting Time & Date Service...
Jan 20 13:26:55 np0005589310 systemd[1]: Started Time & Date Service.
Jan 20 13:26:55 np0005589310 systemd-timedated[6072]: Changed time zone to 'UTC' (UTC).
Jan 20 13:26:55 np0005589310 python3[6101]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:56 np0005589310 python3[6177]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:26:56 np0005589310 python3[6248]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1768933615.8860524-153-242223373032831/source _original_basename=tmp9mb_b4ir follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:57 np0005589310 python3[6348]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:26:57 np0005589310 python3[6419]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1768933616.7911553-183-191478135456329/source _original_basename=tmpi8gsu5pn follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:58 np0005589310 python3[6521]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:26:58 np0005589310 python3[6594]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1768933617.8833928-231-99166232399149/source _original_basename=tmppbey079h follow=False checksum=675da38221554070fad736c9d717667e6ac7d120 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:26:59 np0005589310 python3[6642]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:26:59 np0005589310 python3[6668]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:26:59 np0005589310 python3[6748]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:27:00 np0005589310 python3[6821]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1768933619.4859846-273-191573941791787/source _original_basename=tmp125ww3lk follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:27:00 np0005589310 python3[6872]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-120e-581e-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:27:01 np0005589310 python3[6900]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-120e-581e-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 20 13:27:02 np0005589310 python3[6928]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:27:20 np0005589310 python3[6954]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:27:25 np0005589310 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 13:27:59 np0005589310 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 13:27:59 np0005589310 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 20 13:27:59 np0005589310 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 20 13:27:59 np0005589310 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 20 13:27:59 np0005589310 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 20 13:27:59 np0005589310 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 20 13:27:59 np0005589310 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 20 13:27:59 np0005589310 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 20 13:27:59 np0005589310 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 20 13:27:59 np0005589310 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 20 13:27:59 np0005589310 NetworkManager[858]: <info>  [1768933679.5258] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 20 13:27:59 np0005589310 systemd-udevd[6957]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:27:59 np0005589310 NetworkManager[858]: <info>  [1768933679.5521] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:27:59 np0005589310 NetworkManager[858]: <info>  [1768933679.5544] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 20 13:27:59 np0005589310 NetworkManager[858]: <info>  [1768933679.5546] device (eth1): carrier: link connected
Jan 20 13:27:59 np0005589310 NetworkManager[858]: <info>  [1768933679.5548] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 20 13:27:59 np0005589310 NetworkManager[858]: <info>  [1768933679.5552] policy: auto-activating connection 'Wired connection 1' (fd33b000-20d4-3dcd-9e30-523cad9af7fa)
Jan 20 13:27:59 np0005589310 NetworkManager[858]: <info>  [1768933679.5555] device (eth1): Activation: starting connection 'Wired connection 1' (fd33b000-20d4-3dcd-9e30-523cad9af7fa)
Jan 20 13:27:59 np0005589310 NetworkManager[858]: <info>  [1768933679.5556] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:27:59 np0005589310 NetworkManager[858]: <info>  [1768933679.5558] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:27:59 np0005589310 NetworkManager[858]: <info>  [1768933679.5561] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:27:59 np0005589310 NetworkManager[858]: <info>  [1768933679.5565] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:28:00 np0005589310 python3[6984]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-feea-74cb-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:28:10 np0005589310 python3[7064]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:28:10 np0005589310 python3[7137]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768933690.2825618-102-184637403101572/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=cd4a55093ad04d42dea8a9f1c133b61b367dadc0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:28:11 np0005589310 python3[7187]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:28:11 np0005589310 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 20 13:28:11 np0005589310 systemd[1]: Stopped Network Manager Wait Online.
Jan 20 13:28:11 np0005589310 systemd[1]: Stopping Network Manager Wait Online...
Jan 20 13:28:11 np0005589310 NetworkManager[858]: <info>  [1768933691.7419] caught SIGTERM, shutting down normally.
Jan 20 13:28:11 np0005589310 systemd[1]: Stopping Network Manager...
Jan 20 13:28:11 np0005589310 NetworkManager[858]: <info>  [1768933691.7427] dhcp4 (eth0): canceled DHCP transaction
Jan 20 13:28:11 np0005589310 NetworkManager[858]: <info>  [1768933691.7428] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:28:11 np0005589310 NetworkManager[858]: <info>  [1768933691.7428] dhcp4 (eth0): state changed no lease
Jan 20 13:28:11 np0005589310 NetworkManager[858]: <info>  [1768933691.7430] manager: NetworkManager state is now CONNECTING
Jan 20 13:28:11 np0005589310 NetworkManager[858]: <info>  [1768933691.7524] dhcp4 (eth1): canceled DHCP transaction
Jan 20 13:28:11 np0005589310 NetworkManager[858]: <info>  [1768933691.7524] dhcp4 (eth1): state changed no lease
Jan 20 13:28:11 np0005589310 NetworkManager[858]: <info>  [1768933691.7580] exiting (success)
Jan 20 13:28:11 np0005589310 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:28:11 np0005589310 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:28:11 np0005589310 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 20 13:28:11 np0005589310 systemd[1]: Stopped Network Manager.
Jan 20 13:28:11 np0005589310 systemd[1]: NetworkManager.service: Consumed 1.970s CPU time, 10.0M memory peak.
Jan 20 13:28:11 np0005589310 systemd[1]: Starting Network Manager...
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.8320] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:67fc3c9d-8ab5-4c8d-ad06-0b5b4ad77266)
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.8323] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.8385] manager[0x55602a111000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 20 13:28:11 np0005589310 systemd[1]: Starting Hostname Service...
Jan 20 13:28:11 np0005589310 systemd[1]: Started Hostname Service.
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9081] hostname: hostname: using hostnamed
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9081] hostname: static hostname changed from (none) to "np0005589310.novalocal"
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9088] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9093] manager[0x55602a111000]: rfkill: Wi-Fi hardware radio set enabled
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9094] manager[0x55602a111000]: rfkill: WWAN hardware radio set enabled
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9125] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9125] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9126] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9126] manager: Networking is enabled by state file
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9129] settings: Loaded settings plugin: keyfile (internal)
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9132] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9161] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9171] dhcp: init: Using DHCP client 'internal'
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9174] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9179] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9185] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9193] device (lo): Activation: starting connection 'lo' (9dbcb845-48af-44e7-aac2-9b1c27d04ec3)
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9200] device (eth0): carrier: link connected
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9205] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9210] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9211] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9217] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9224] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9231] device (eth1): carrier: link connected
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9236] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9242] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (fd33b000-20d4-3dcd-9e30-523cad9af7fa) (indicated)
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9242] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9248] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9255] device (eth1): Activation: starting connection 'Wired connection 1' (fd33b000-20d4-3dcd-9e30-523cad9af7fa)
Jan 20 13:28:11 np0005589310 systemd[1]: Started Network Manager.
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9263] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9268] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9270] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9272] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9275] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9278] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9280] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9283] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9286] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9293] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9297] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9307] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9309] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9329] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9331] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9336] device (lo): Activation: successful, device activated.
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9359] dhcp4 (eth0): state changed new lease, address=38.102.83.210
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9367] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 20 13:28:11 np0005589310 systemd[1]: Starting Network Manager Wait Online...
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9445] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9466] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9468] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9472] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9475] device (eth0): Activation: successful, device activated.
Jan 20 13:28:11 np0005589310 NetworkManager[7195]: <info>  [1768933691.9482] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 20 13:28:12 np0005589310 python3[7271]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-feea-74cb-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:28:22 np0005589310 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:28:41 np0005589310 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 13:28:50 np0005589310 systemd[4314]: Starting Mark boot as successful...
Jan 20 13:28:50 np0005589310 systemd[4314]: Finished Mark boot as successful.
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.3738] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 13:28:57 np0005589310 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:28:57 np0005589310 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4053] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4057] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4068] device (eth1): Activation: successful, device activated.
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4079] manager: startup complete
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4083] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <warn>  [1768933737.4092] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4104] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 20 13:28:57 np0005589310 systemd[1]: Finished Network Manager Wait Online.
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4194] dhcp4 (eth1): canceled DHCP transaction
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4195] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4195] dhcp4 (eth1): state changed no lease
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4217] policy: auto-activating connection 'ci-private-network' (3f70ede9-7960-5c64-9771-a2eedfd4d85a)
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4226] device (eth1): Activation: starting connection 'ci-private-network' (3f70ede9-7960-5c64-9771-a2eedfd4d85a)
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4228] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4233] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4243] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4258] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4305] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4308] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:28:57 np0005589310 NetworkManager[7195]: <info>  [1768933737.4319] device (eth1): Activation: successful, device activated.
Jan 20 13:29:07 np0005589310 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:29:11 np0005589310 systemd-logind[797]: Session 1 logged out. Waiting for processes to exit.
Jan 20 13:29:11 np0005589310 systemd-logind[797]: New session 3 of user zuul.
Jan 20 13:29:11 np0005589310 systemd[1]: Started Session 3 of User zuul.
Jan 20 13:29:12 np0005589310 python3[7386]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:29:12 np0005589310 python3[7459]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/ansible-tmp-1768933752.1180925-267-225970933910862/source _original_basename=tmpta63ztqu follow=False checksum=28a61f56a02f2805646416fe6ddd7237f7944961 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:29:15 np0005589310 systemd[1]: session-3.scope: Deactivated successfully.
Jan 20 13:29:15 np0005589310 systemd-logind[797]: Session 3 logged out. Waiting for processes to exit.
Jan 20 13:29:15 np0005589310 systemd-logind[797]: Removed session 3.
Jan 20 13:31:50 np0005589310 systemd[4314]: Created slice User Background Tasks Slice.
Jan 20 13:31:50 np0005589310 systemd[4314]: Starting Cleanup of User's Temporary Files and Directories...
Jan 20 13:31:50 np0005589310 systemd[4314]: Finished Cleanup of User's Temporary Files and Directories.
Jan 20 13:36:44 np0005589310 systemd-logind[797]: New session 4 of user zuul.
Jan 20 13:36:44 np0005589310 systemd[1]: Started Session 4 of User zuul.
Jan 20 13:36:44 np0005589310 python3[7527]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-78e9-2ad2-00000000216f-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:36:45 np0005589310 python3[7556]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:36:45 np0005589310 python3[7582]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:36:45 np0005589310 python3[7608]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:36:45 np0005589310 python3[7634]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:36:46 np0005589310 python3[7660]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:36:46 np0005589310 python3[7738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:36:47 np0005589310 python3[7811]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768934206.4994287-498-233378964060786/source _original_basename=tmp694qpsgn follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:36:48 np0005589310 python3[7861]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 13:36:48 np0005589310 systemd[1]: Reloading.
Jan 20 13:36:48 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:36:49 np0005589310 python3[7917]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 20 13:36:50 np0005589310 python3[7943]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:36:50 np0005589310 python3[7971]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:36:50 np0005589310 python3[7999]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:36:51 np0005589310 python3[8027]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:36:51 np0005589310 python3[8054]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-78e9-2ad2-000000002176-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:36:52 np0005589310 python3[8084]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 13:36:54 np0005589310 systemd[1]: session-4.scope: Deactivated successfully.
Jan 20 13:36:54 np0005589310 systemd[1]: session-4.scope: Consumed 3.934s CPU time.
Jan 20 13:36:54 np0005589310 systemd-logind[797]: Session 4 logged out. Waiting for processes to exit.
Jan 20 13:36:54 np0005589310 systemd-logind[797]: Removed session 4.
Jan 20 13:36:56 np0005589310 systemd-logind[797]: New session 5 of user zuul.
Jan 20 13:36:56 np0005589310 systemd[1]: Started Session 5 of User zuul.
Jan 20 13:36:56 np0005589310 python3[8118]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 13:36:58 np0005589310 irqbalance[789]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 20 13:36:58 np0005589310 irqbalance[789]: IRQ 27 affinity is now unmanaged
Jan 20 13:37:05 np0005589310 setsebool[8163]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 20 13:37:05 np0005589310 setsebool[8163]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 20 13:37:17 np0005589310 kernel: SELinux:  Converting 383 SID table entries...
Jan 20 13:37:17 np0005589310 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:37:17 np0005589310 kernel: SELinux:  policy capability open_perms=1
Jan 20 13:37:17 np0005589310 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:37:17 np0005589310 kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:37:17 np0005589310 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:37:17 np0005589310 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:37:17 np0005589310 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:37:26 np0005589310 kernel: SELinux:  Converting 386 SID table entries...
Jan 20 13:37:26 np0005589310 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:37:26 np0005589310 kernel: SELinux:  policy capability open_perms=1
Jan 20 13:37:26 np0005589310 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:37:26 np0005589310 kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:37:26 np0005589310 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:37:26 np0005589310 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:37:26 np0005589310 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:37:43 np0005589310 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 20 13:37:43 np0005589310 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:37:43 np0005589310 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:37:43 np0005589310 systemd[1]: Reloading.
Jan 20 13:37:43 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:37:43 np0005589310 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:37:48 np0005589310 python3[13087]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-386d-653e-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:37:49 np0005589310 kernel: evm: overlay not supported
Jan 20 13:37:49 np0005589310 systemd[4314]: Starting D-Bus User Message Bus...
Jan 20 13:37:49 np0005589310 dbus-broker-launch[13919]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 20 13:37:49 np0005589310 dbus-broker-launch[13919]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 20 13:37:49 np0005589310 systemd[4314]: Started D-Bus User Message Bus.
Jan 20 13:37:49 np0005589310 dbus-broker-lau[13919]: Ready
Jan 20 13:37:49 np0005589310 systemd[4314]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 20 13:37:49 np0005589310 systemd[4314]: Created slice Slice /user.
Jan 20 13:37:49 np0005589310 systemd[4314]: podman-13851.scope: unit configures an IP firewall, but not running as root.
Jan 20 13:37:49 np0005589310 systemd[4314]: (This warning is only shown for the first unit using IP firewalling.)
Jan 20 13:37:49 np0005589310 systemd[4314]: Started podman-13851.scope.
Jan 20 13:37:50 np0005589310 systemd[4314]: Started podman-pause-30ff7b45.scope.
Jan 20 13:37:50 np0005589310 python3[14034]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.246:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.246:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:37:50 np0005589310 python3[14034]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 20 13:37:51 np0005589310 systemd[1]: session-5.scope: Deactivated successfully.
Jan 20 13:37:51 np0005589310 systemd[1]: session-5.scope: Consumed 41.268s CPU time.
Jan 20 13:37:51 np0005589310 systemd-logind[797]: Session 5 logged out. Waiting for processes to exit.
Jan 20 13:37:51 np0005589310 systemd-logind[797]: Removed session 5.
Jan 20 13:38:02 np0005589310 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 20 13:38:02 np0005589310 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 20 13:38:02 np0005589310 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 20 13:38:02 np0005589310 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 20 13:38:13 np0005589310 systemd-logind[797]: New session 6 of user zuul.
Jan 20 13:38:13 np0005589310 systemd[1]: Started Session 6 of User zuul.
Jan 20 13:38:14 np0005589310 python3[24826]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBpC8BQUUVe+S/xfNur/1J7ZxLnegLSyGFNXjeqwcF3o8RrsLEcuGdBmAMmxP8SjUaneFgOL7H3Pr6ghGA58O/0= zuul@np0005589309.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:38:14 np0005589310 python3[24989]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBpC8BQUUVe+S/xfNur/1J7ZxLnegLSyGFNXjeqwcF3o8RrsLEcuGdBmAMmxP8SjUaneFgOL7H3Pr6ghGA58O/0= zuul@np0005589309.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:38:15 np0005589310 python3[25403]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005589310.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 20 13:38:15 np0005589310 python3[25640]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBpC8BQUUVe+S/xfNur/1J7ZxLnegLSyGFNXjeqwcF3o8RrsLEcuGdBmAMmxP8SjUaneFgOL7H3Pr6ghGA58O/0= zuul@np0005589309.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 13:38:16 np0005589310 python3[25939]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:38:16 np0005589310 python3[26216]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1768934296.058853-135-41333304131391/source _original_basename=tmptf82l9_a follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:38:17 np0005589310 python3[26596]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 20 13:38:17 np0005589310 systemd[1]: Starting Hostname Service...
Jan 20 13:38:17 np0005589310 systemd[1]: Started Hostname Service.
Jan 20 13:38:17 np0005589310 systemd-hostnamed[26714]: Changed pretty hostname to 'compute-0'
Jan 20 13:38:17 np0005589310 systemd-hostnamed[26714]: Hostname set to <compute-0> (static)
Jan 20 13:38:17 np0005589310 NetworkManager[7195]: <info>  [1768934297.7834] hostname: static hostname changed from "np0005589310.novalocal" to "compute-0"
Jan 20 13:38:17 np0005589310 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:38:17 np0005589310 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:38:18 np0005589310 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 13:38:18 np0005589310 systemd[1]: session-6.scope: Consumed 2.184s CPU time.
Jan 20 13:38:18 np0005589310 systemd-logind[797]: Session 6 logged out. Waiting for processes to exit.
Jan 20 13:38:18 np0005589310 systemd-logind[797]: Removed session 6.
Jan 20 13:38:25 np0005589310 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:38:25 np0005589310 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:38:25 np0005589310 systemd[1]: man-db-cache-update.service: Consumed 51.328s CPU time.
Jan 20 13:38:25 np0005589310 systemd[1]: run-r3dfe753fb93748e9b72d93297ed76bf9.service: Deactivated successfully.
Jan 20 13:38:27 np0005589310 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:38:47 np0005589310 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 13:39:50 np0005589310 systemd[1]: Starting dnf makecache...
Jan 20 13:39:50 np0005589310 dnf[29958]: Failed determining last makecache time.
Jan 20 13:39:50 np0005589310 dnf[29958]: CentOS Stream 9 - BaseOS                         28 kB/s | 6.4 kB     00:00
Jan 20 13:39:51 np0005589310 dnf[29958]: CentOS Stream 9 - AppStream                      29 kB/s | 6.8 kB     00:00
Jan 20 13:39:51 np0005589310 dnf[29958]: CentOS Stream 9 - CRB                            61 kB/s | 6.3 kB     00:00
Jan 20 13:39:51 np0005589310 dnf[29958]: CentOS Stream 9 - Extras packages                63 kB/s | 7.3 kB     00:00
Jan 20 13:39:51 np0005589310 dnf[29958]: Metadata cache created.
Jan 20 13:39:51 np0005589310 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 20 13:39:51 np0005589310 systemd[1]: Finished dnf makecache.
Jan 20 13:42:00 np0005589310 systemd-logind[797]: New session 7 of user zuul.
Jan 20 13:42:00 np0005589310 systemd[1]: Started Session 7 of User zuul.
Jan 20 13:42:01 np0005589310 python3[30044]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:42:02 np0005589310 python3[30160]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:42:03 np0005589310 python3[30233]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768934522.632913-33587-152330011721161/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:42:03 np0005589310 python3[30259]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:42:03 np0005589310 python3[30332]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768934522.632913-33587-152330011721161/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:42:04 np0005589310 python3[30358]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:42:04 np0005589310 python3[30431]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768934522.632913-33587-152330011721161/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:42:04 np0005589310 python3[30457]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:42:05 np0005589310 python3[30530]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768934522.632913-33587-152330011721161/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:42:05 np0005589310 python3[30556]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:42:05 np0005589310 python3[30629]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768934522.632913-33587-152330011721161/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:42:05 np0005589310 python3[30655]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:42:06 np0005589310 python3[30728]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768934522.632913-33587-152330011721161/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:42:06 np0005589310 python3[30754]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 13:42:06 np0005589310 python3[30827]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768934522.632913-33587-152330011721161/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:42:21 np0005589310 python3[30887]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:47:20 np0005589310 systemd-logind[797]: Session 7 logged out. Waiting for processes to exit.
Jan 20 13:47:20 np0005589310 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 13:47:20 np0005589310 systemd[1]: session-7.scope: Consumed 4.558s CPU time.
Jan 20 13:47:20 np0005589310 systemd-logind[797]: Removed session 7.
Jan 20 13:53:10 np0005589310 systemd-logind[797]: New session 8 of user zuul.
Jan 20 13:53:10 np0005589310 systemd[1]: Started Session 8 of User zuul.
Jan 20 13:53:11 np0005589310 python3.9[31057]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:53:12 np0005589310 python3.9[31238]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:53:20 np0005589310 systemd-logind[797]: Session 8 logged out. Waiting for processes to exit.
Jan 20 13:53:20 np0005589310 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 13:53:20 np0005589310 systemd[1]: session-8.scope: Consumed 7.385s CPU time.
Jan 20 13:53:20 np0005589310 systemd-logind[797]: Removed session 8.
Jan 20 13:53:35 np0005589310 systemd-logind[797]: New session 9 of user zuul.
Jan 20 13:53:35 np0005589310 systemd[1]: Started Session 9 of User zuul.
Jan 20 13:53:36 np0005589310 python3.9[31453]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 20 13:53:37 np0005589310 python3.9[31627]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:53:38 np0005589310 python3.9[31779]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:53:38 np0005589310 python3.9[31932]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:53:39 np0005589310 python3.9[32084]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:40 np0005589310 python3.9[32236]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:53:41 np0005589310 python3.9[32359]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935219.8563318-68-245528657497906/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:42 np0005589310 python3.9[32511]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:53:42 np0005589310 python3.9[32667]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:53:43 np0005589310 python3.9[32819]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:53:44 np0005589310 python3.9[32969]: ansible-ansible.builtin.service_facts Invoked
Jan 20 13:53:49 np0005589310 python3.9[33222]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:53:50 np0005589310 python3.9[33372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:53:51 np0005589310 python3.9[33526]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:53:52 np0005589310 python3.9[33684]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:53:53 np0005589310 python3.9[33768]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:54:43 np0005589310 systemd[1]: Reloading.
Jan 20 13:54:43 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:54:43 np0005589310 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 20 13:54:44 np0005589310 systemd[1]: Reloading.
Jan 20 13:54:44 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:54:44 np0005589310 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 20 13:54:44 np0005589310 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 20 13:54:44 np0005589310 systemd[1]: Reloading.
Jan 20 13:54:44 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:54:44 np0005589310 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 20 13:54:44 np0005589310 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 20 13:54:44 np0005589310 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 20 13:54:44 np0005589310 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 20 13:56:01 np0005589310 kernel: SELinux:  Converting 2722 SID table entries...
Jan 20 13:56:01 np0005589310 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:56:01 np0005589310 kernel: SELinux:  policy capability open_perms=1
Jan 20 13:56:01 np0005589310 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:56:01 np0005589310 kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:56:01 np0005589310 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:56:01 np0005589310 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:56:01 np0005589310 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:56:01 np0005589310 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 20 13:56:01 np0005589310 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:56:01 np0005589310 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:56:01 np0005589310 systemd[1]: Reloading.
Jan 20 13:56:01 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:56:01 np0005589310 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:56:02 np0005589310 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:56:02 np0005589310 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:56:02 np0005589310 systemd[1]: man-db-cache-update.service: Consumed 1.121s CPU time.
Jan 20 13:56:02 np0005589310 systemd[1]: run-rb123df4f28dc49dfaa370554c2e9c029.service: Deactivated successfully.
Jan 20 13:56:02 np0005589310 python3.9[35284]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:56:04 np0005589310 python3.9[35566]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 20 13:56:05 np0005589310 python3.9[35718]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 20 13:56:08 np0005589310 python3.9[35871]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:56:09 np0005589310 python3.9[36023]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 20 13:56:10 np0005589310 python3.9[36175]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:56:11 np0005589310 python3.9[36327]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:56:11 np0005589310 python3.9[36450]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935370.7688947-231-39266725487616/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a3ba5373cbe9b77d5caa7583160220709f3d2e75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:56:13 np0005589310 python3.9[36602]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:56:15 np0005589310 python3.9[36754]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:56:16 np0005589310 python3.9[36908]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:56:18 np0005589310 python3.9[37060]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 20 13:56:18 np0005589310 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 13:56:20 np0005589310 python3.9[37214]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 13:56:21 np0005589310 python3.9[37372]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 13:56:22 np0005589310 python3.9[37532]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 20 13:56:22 np0005589310 python3.9[37685]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 13:56:23 np0005589310 python3.9[37843]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 20 13:56:24 np0005589310 python3.9[37995]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:56:27 np0005589310 python3.9[38148]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:56:27 np0005589310 python3.9[38300]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:56:28 np0005589310 python3.9[38423]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935387.4535568-350-72672085320745/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:56:29 np0005589310 python3.9[38575]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:56:30 np0005589310 systemd[1]: Starting Load Kernel Modules...
Jan 20 13:56:30 np0005589310 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 13:56:30 np0005589310 kernel: Bridge firewalling registered
Jan 20 13:56:30 np0005589310 systemd-modules-load[38579]: Inserted module 'br_netfilter'
Jan 20 13:56:30 np0005589310 systemd[1]: Finished Load Kernel Modules.
Jan 20 13:56:31 np0005589310 python3.9[38735]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:56:31 np0005589310 python3.9[38858]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935390.8282022-373-199954707583882/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:56:32 np0005589310 python3.9[39010]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:56:35 np0005589310 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 20 13:56:35 np0005589310 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 20 13:56:36 np0005589310 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:56:36 np0005589310 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:56:36 np0005589310 systemd[1]: Reloading.
Jan 20 13:56:36 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:56:36 np0005589310 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:56:38 np0005589310 python3.9[40202]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:56:39 np0005589310 python3.9[41126]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 20 13:56:40 np0005589310 python3.9[42108]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:56:41 np0005589310 python3.9[43042]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:56:41 np0005589310 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 20 13:56:41 np0005589310 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:56:41 np0005589310 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:56:41 np0005589310 systemd[1]: man-db-cache-update.service: Consumed 4.593s CPU time.
Jan 20 13:56:41 np0005589310 systemd[1]: run-r0a68c49366404f70ac4684f2acfd1cf8.service: Deactivated successfully.
Jan 20 13:56:41 np0005589310 systemd[1]: Starting Authorization Manager...
Jan 20 13:56:41 np0005589310 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 20 13:56:41 np0005589310 polkitd[43397]: Started polkitd version 0.117
Jan 20 13:56:41 np0005589310 systemd[1]: Started Authorization Manager.
Jan 20 13:56:42 np0005589310 python3.9[43567]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:56:42 np0005589310 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 20 13:56:42 np0005589310 systemd[1]: tuned.service: Deactivated successfully.
Jan 20 13:56:42 np0005589310 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 20 13:56:42 np0005589310 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 20 13:56:42 np0005589310 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 20 13:56:43 np0005589310 python3.9[43728]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 20 13:56:45 np0005589310 python3.9[43880]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:56:45 np0005589310 systemd[1]: Reloading.
Jan 20 13:56:45 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:56:46 np0005589310 python3.9[44069]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:56:46 np0005589310 systemd[1]: Reloading.
Jan 20 13:56:46 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:56:47 np0005589310 python3.9[44258]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:56:48 np0005589310 python3.9[44411]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:56:48 np0005589310 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 20 13:56:48 np0005589310 python3.9[44564]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:56:51 np0005589310 python3.9[44726]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:56:51 np0005589310 python3.9[44879]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:56:51 np0005589310 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 13:56:51 np0005589310 systemd[1]: Stopped Apply Kernel Variables.
Jan 20 13:56:51 np0005589310 systemd[1]: Stopping Apply Kernel Variables...
Jan 20 13:56:51 np0005589310 systemd[1]: Starting Apply Kernel Variables...
Jan 20 13:56:51 np0005589310 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 13:56:51 np0005589310 systemd[1]: Finished Apply Kernel Variables.
Jan 20 13:56:52 np0005589310 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 13:56:52 np0005589310 systemd[1]: session-9.scope: Consumed 2min 12.849s CPU time.
Jan 20 13:56:52 np0005589310 systemd-logind[797]: Session 9 logged out. Waiting for processes to exit.
Jan 20 13:56:52 np0005589310 systemd-logind[797]: Removed session 9.
Jan 20 13:56:58 np0005589310 systemd-logind[797]: New session 10 of user zuul.
Jan 20 13:56:58 np0005589310 systemd[1]: Started Session 10 of User zuul.
Jan 20 13:56:59 np0005589310 python3.9[45062]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:57:00 np0005589310 python3.9[45218]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 20 13:57:01 np0005589310 python3.9[45371]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 13:57:02 np0005589310 python3.9[45529]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 13:57:03 np0005589310 python3.9[45689]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:57:04 np0005589310 python3.9[45773]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 13:57:08 np0005589310 python3.9[45936]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:57:22 np0005589310 kernel: SELinux:  Converting 2735 SID table entries...
Jan 20 13:57:22 np0005589310 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:57:22 np0005589310 kernel: SELinux:  policy capability open_perms=1
Jan 20 13:57:22 np0005589310 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:57:22 np0005589310 kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:57:22 np0005589310 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:57:22 np0005589310 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:57:22 np0005589310 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:57:22 np0005589310 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 20 13:57:22 np0005589310 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 20 13:57:25 np0005589310 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:57:25 np0005589310 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:57:25 np0005589310 systemd[1]: Reloading.
Jan 20 13:57:25 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:57:25 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:57:25 np0005589310 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:57:25 np0005589310 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:57:25 np0005589310 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:57:25 np0005589310 systemd[1]: run-ra849372f00b04c8bbe297f2d8b287318.service: Deactivated successfully.
Jan 20 13:57:26 np0005589310 python3.9[47034]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 13:57:26 np0005589310 systemd[1]: Reloading.
Jan 20 13:57:26 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:57:27 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:57:27 np0005589310 systemd[1]: Starting Open vSwitch Database Unit...
Jan 20 13:57:27 np0005589310 chown[47076]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 20 13:57:27 np0005589310 ovs-ctl[47081]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 20 13:57:27 np0005589310 ovs-ctl[47081]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 20 13:57:27 np0005589310 ovs-ctl[47081]: Starting ovsdb-server [  OK  ]
Jan 20 13:57:27 np0005589310 ovs-vsctl[47130]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 20 13:57:27 np0005589310 ovs-vsctl[47150]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"15f2b046-37e6-488b-9e52-3d187c798598\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 20 13:57:27 np0005589310 ovs-ctl[47081]: Configuring Open vSwitch system IDs [  OK  ]
Jan 20 13:57:27 np0005589310 ovs-ctl[47081]: Enabling remote OVSDB managers [  OK  ]
Jan 20 13:57:27 np0005589310 ovs-vsctl[47156]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 20 13:57:27 np0005589310 systemd[1]: Started Open vSwitch Database Unit.
Jan 20 13:57:27 np0005589310 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 20 13:57:27 np0005589310 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 20 13:57:27 np0005589310 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 20 13:57:27 np0005589310 kernel: openvswitch: Open vSwitch switching datapath
Jan 20 13:57:27 np0005589310 ovs-ctl[47200]: Inserting openvswitch module [  OK  ]
Jan 20 13:57:27 np0005589310 ovs-ctl[47169]: Starting ovs-vswitchd [  OK  ]
Jan 20 13:57:27 np0005589310 ovs-ctl[47169]: Enabling remote OVSDB managers [  OK  ]
Jan 20 13:57:27 np0005589310 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 20 13:57:27 np0005589310 ovs-vsctl[47218]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 20 13:57:27 np0005589310 systemd[1]: Starting Open vSwitch...
Jan 20 13:57:27 np0005589310 systemd[1]: Finished Open vSwitch.
Jan 20 13:57:28 np0005589310 python3.9[47369]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:57:29 np0005589310 python3.9[47521]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 20 13:57:30 np0005589310 kernel: SELinux:  Converting 2749 SID table entries...
Jan 20 13:57:30 np0005589310 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 13:57:30 np0005589310 kernel: SELinux:  policy capability open_perms=1
Jan 20 13:57:30 np0005589310 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 13:57:30 np0005589310 kernel: SELinux:  policy capability always_check_network=0
Jan 20 13:57:30 np0005589310 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 13:57:30 np0005589310 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 13:57:30 np0005589310 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 13:57:31 np0005589310 python3.9[47676]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:57:32 np0005589310 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 20 13:57:32 np0005589310 python3.9[47834]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:57:34 np0005589310 python3.9[47987]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:57:35 np0005589310 python3.9[48274]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 20 13:57:36 np0005589310 python3.9[48425]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:57:37 np0005589310 python3.9[48579]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:57:40 np0005589310 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:57:40 np0005589310 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:57:40 np0005589310 systemd[1]: Reloading.
Jan 20 13:57:40 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:57:40 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:57:40 np0005589310 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:57:40 np0005589310 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:57:40 np0005589310 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:57:40 np0005589310 systemd[1]: run-rbea9b80d3d0943aa97d2cf9a3f3c8ead.service: Deactivated successfully.
Jan 20 13:57:41 np0005589310 python3.9[48898]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:57:41 np0005589310 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 20 13:57:41 np0005589310 systemd[1]: Stopped Network Manager Wait Online.
Jan 20 13:57:41 np0005589310 systemd[1]: Stopping Network Manager Wait Online...
Jan 20 13:57:41 np0005589310 systemd[1]: Stopping Network Manager...
Jan 20 13:57:41 np0005589310 NetworkManager[7195]: <info>  [1768935461.5699] caught SIGTERM, shutting down normally.
Jan 20 13:57:41 np0005589310 NetworkManager[7195]: <info>  [1768935461.5712] dhcp4 (eth0): canceled DHCP transaction
Jan 20 13:57:41 np0005589310 NetworkManager[7195]: <info>  [1768935461.5712] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:57:41 np0005589310 NetworkManager[7195]: <info>  [1768935461.5712] dhcp4 (eth0): state changed no lease
Jan 20 13:57:41 np0005589310 NetworkManager[7195]: <info>  [1768935461.5714] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 13:57:41 np0005589310 NetworkManager[7195]: <info>  [1768935461.5780] exiting (success)
Jan 20 13:57:41 np0005589310 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:57:41 np0005589310 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:57:41 np0005589310 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 20 13:57:41 np0005589310 systemd[1]: Stopped Network Manager.
Jan 20 13:57:41 np0005589310 systemd[1]: NetworkManager.service: Consumed 10.944s CPU time, 4.4M memory peak, read 0B from disk, written 21.5K to disk.
Jan 20 13:57:41 np0005589310 systemd[1]: Starting Network Manager...
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.6489] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:67fc3c9d-8ab5-4c8d-ad06-0b5b4ad77266)
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.6492] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.6543] manager[0x55d9d0041000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 20 13:57:41 np0005589310 systemd[1]: Starting Hostname Service...
Jan 20 13:57:41 np0005589310 systemd[1]: Started Hostname Service.
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7346] hostname: hostname: using hostnamed
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7346] hostname: static hostname changed from (none) to "compute-0"
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7351] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7356] manager[0x55d9d0041000]: rfkill: Wi-Fi hardware radio set enabled
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7356] manager[0x55d9d0041000]: rfkill: WWAN hardware radio set enabled
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7375] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7383] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7383] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7384] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7384] manager: Networking is enabled by state file
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7387] settings: Loaded settings plugin: keyfile (internal)
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7390] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7415] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7424] dhcp: init: Using DHCP client 'internal'
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7426] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7430] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7435] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7441] device (lo): Activation: starting connection 'lo' (9dbcb845-48af-44e7-aac2-9b1c27d04ec3)
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7446] device (eth0): carrier: link connected
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7450] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7454] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7454] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7459] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7464] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7469] device (eth1): carrier: link connected
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7472] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7476] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (3f70ede9-7960-5c64-9771-a2eedfd4d85a) (indicated)
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7477] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7481] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7486] device (eth1): Activation: starting connection 'ci-private-network' (3f70ede9-7960-5c64-9771-a2eedfd4d85a)
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7491] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 20 13:57:41 np0005589310 systemd[1]: Started Network Manager.
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7497] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7499] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7501] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7503] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7505] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7507] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7509] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7512] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7518] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7520] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7547] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7561] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7576] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7579] dhcp4 (eth0): state changed new lease, address=38.102.83.210
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7582] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7589] device (lo): Activation: successful, device activated.
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7602] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 20 13:57:41 np0005589310 systemd[1]: Starting Network Manager Wait Online...
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7673] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7681] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7684] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7689] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7693] device (eth1): Activation: successful, device activated.
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7762] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7765] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7770] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7774] device (eth0): Activation: successful, device activated.
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7781] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 20 13:57:41 np0005589310 NetworkManager[48913]: <info>  [1768935461.7784] manager: startup complete
Jan 20 13:57:41 np0005589310 systemd[1]: Finished Network Manager Wait Online.
Jan 20 13:57:42 np0005589310 python3.9[49124]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:57:51 np0005589310 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:57:52 np0005589310 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 13:57:52 np0005589310 systemd[1]: Starting man-db-cache-update.service...
Jan 20 13:57:52 np0005589310 systemd[1]: Reloading.
Jan 20 13:57:52 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:57:52 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:57:52 np0005589310 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 13:57:53 np0005589310 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 13:57:53 np0005589310 systemd[1]: Finished man-db-cache-update.service.
Jan 20 13:57:53 np0005589310 systemd[1]: run-rd12956085b6e45e4a80dd672b24344c0.service: Deactivated successfully.
Jan 20 13:57:53 np0005589310 python3.9[49584]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:57:54 np0005589310 python3.9[49736]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:57:55 np0005589310 python3.9[49890]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:57:55 np0005589310 python3.9[50042]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:57:56 np0005589310 python3.9[50194]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:57:57 np0005589310 python3.9[50346]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:57:57 np0005589310 python3.9[50498]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:57:58 np0005589310 python3.9[50622]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935477.4060614-224-79759771986923/.source _original_basename=.ow39ac49 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:57:59 np0005589310 python3.9[50774]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:57:59 np0005589310 python3.9[50926]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 20 13:58:00 np0005589310 python3.9[51078]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:58:02 np0005589310 python3.9[51505]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 20 13:58:03 np0005589310 ansible-async_wrapper.py[51680]: Invoked with j188714558496 300 /home/zuul/.ansible/tmp/ansible-tmp-1768935482.796708-290-272436610895250/AnsiballZ_edpm_os_net_config.py _
Jan 20 13:58:03 np0005589310 ansible-async_wrapper.py[51683]: Starting module and watcher
Jan 20 13:58:03 np0005589310 ansible-async_wrapper.py[51683]: Start watching 51684 (300)
Jan 20 13:58:03 np0005589310 ansible-async_wrapper.py[51684]: Start module (51684)
Jan 20 13:58:03 np0005589310 ansible-async_wrapper.py[51680]: Return async_wrapper task started.
Jan 20 13:58:03 np0005589310 python3.9[51685]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 20 13:58:04 np0005589310 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 20 13:58:04 np0005589310 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 20 13:58:04 np0005589310 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 20 13:58:04 np0005589310 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 20 13:58:04 np0005589310 kernel: cfg80211: failed to load regulatory.db
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.3716] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.3734] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4191] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4192] audit: op="connection-add" uuid="b467c5ba-25b1-4fe0-a044-a98bd5a8ea8f" name="br-ex-br" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4207] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4208] audit: op="connection-add" uuid="a30c3806-19c1-4774-9144-062e5e999330" name="br-ex-port" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4217] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4218] audit: op="connection-add" uuid="d84fac66-13fa-47e8-89b3-8ce25616c31c" name="eth1-port" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4228] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4229] audit: op="connection-add" uuid="2c1ba911-ad09-4bdc-985f-0b695ec2a13b" name="vlan20-port" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4239] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4240] audit: op="connection-add" uuid="c2689e70-8772-4d0c-9ef0-e140a0a893c7" name="vlan21-port" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4250] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4251] audit: op="connection-add" uuid="92954c86-a800-48d3-86f9-70f1d9766cca" name="vlan22-port" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4261] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4263] audit: op="connection-add" uuid="ce17356c-fe12-4caa-a7c1-55f601f4690b" name="vlan23-port" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4280] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,connection.timestamp" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4293] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4295] audit: op="connection-add" uuid="7628cce7-0f52-4351-b287-3dcb42e8f166" name="br-ex-if" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4349] audit: op="connection-update" uuid="3f70ede9-7960-5c64-9771-a2eedfd4d85a" name="ci-private-network" args="ipv6.routes,ipv6.addr-gen-mode,ipv6.dns,ipv6.routing-rules,ipv6.addresses,ipv6.method,ipv4.routes,ipv4.dns,ipv4.routing-rules,ipv4.never-default,ipv4.addresses,ipv4.method,ovs-interface.type,connection.controller,connection.slave-type,connection.port-type,connection.timestamp,connection.master,ovs-external-ids.data" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4364] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4365] audit: op="connection-add" uuid="bb30e430-d451-4997-93f3-7de1908603e7" name="vlan20-if" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4379] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4381] audit: op="connection-add" uuid="3aaca0df-ed3f-42e3-a752-631aacaa7601" name="vlan21-if" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4395] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4396] audit: op="connection-add" uuid="e4b39470-b83b-4f06-bda5-6893e6ab1573" name="vlan22-if" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4412] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4413] audit: op="connection-add" uuid="a9cea091-39e4-4b02-9e33-016e2f8116e5" name="vlan23-if" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4426] audit: op="connection-delete" uuid="fd33b000-20d4-3dcd-9e30-523cad9af7fa" name="Wired connection 1" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4438] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <warn>  [1768935485.4441] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4448] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4451] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (b467c5ba-25b1-4fe0-a044-a98bd5a8ea8f)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4451] audit: op="connection-activate" uuid="b467c5ba-25b1-4fe0-a044-a98bd5a8ea8f" name="br-ex-br" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4452] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <warn>  [1768935485.4453] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4456] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4459] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (a30c3806-19c1-4774-9144-062e5e999330)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4460] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <warn>  [1768935485.4461] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4464] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4467] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (d84fac66-13fa-47e8-89b3-8ce25616c31c)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4468] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <warn>  [1768935485.4468] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4472] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4475] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (2c1ba911-ad09-4bdc-985f-0b695ec2a13b)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4476] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <warn>  [1768935485.4477] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4480] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4483] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (c2689e70-8772-4d0c-9ef0-e140a0a893c7)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4484] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <warn>  [1768935485.4484] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4488] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4490] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (92954c86-a800-48d3-86f9-70f1d9766cca)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4491] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <warn>  [1768935485.4492] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4496] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4499] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (ce17356c-fe12-4caa-a7c1-55f601f4690b)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4500] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4501] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4503] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4507] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <warn>  [1768935485.4508] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4510] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4513] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (7628cce7-0f52-4351-b287-3dcb42e8f166)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4514] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4516] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4517] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4518] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4518] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4525] device (eth1): disconnecting for new activation request.
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4526] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4528] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4529] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4530] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4532] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <warn>  [1768935485.4532] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4534] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4537] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (bb30e430-d451-4997-93f3-7de1908603e7)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4537] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4539] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4541] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4542] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4544] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <warn>  [1768935485.4544] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4546] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4549] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (3aaca0df-ed3f-42e3-a752-631aacaa7601)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4549] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4551] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4552] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4553] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4555] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <warn>  [1768935485.4556] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4558] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4561] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (e4b39470-b83b-4f06-bda5-6893e6ab1573)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4562] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4564] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4566] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4566] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4568] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <warn>  [1768935485.4569] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4571] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4574] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (a9cea091-39e4-4b02-9e33-016e2f8116e5)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4575] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4576] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4578] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4578] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4580] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4589] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4591] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4593] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4595] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4600] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4603] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4606] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4608] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4609] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4612] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4616] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4618] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4620] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 kernel: ovs-system: entered promiscuous mode
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4633] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4637] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4639] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4641] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4645] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4648] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4650] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4652] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4656] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4659] dhcp4 (eth0): canceled DHCP transaction
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4660] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4660] dhcp4 (eth0): state changed no lease
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4661] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 20 13:58:05 np0005589310 systemd-udevd[51690]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:58:05 np0005589310 kernel: Timeout policy base is empty
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4670] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4673] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51686 uid=0 result="fail" reason="Device is not activated"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4706] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4709] dhcp4 (eth0): state changed new lease, address=38.102.83.210
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4745] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4750] device (eth1): disconnecting for new activation request.
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4751] audit: op="connection-activate" uuid="3f70ede9-7960-5c64-9771-a2eedfd4d85a" name="ci-private-network" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4752] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4756] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4783] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51686 uid=0 result="success"
Jan 20 13:58:05 np0005589310 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.4912] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 20 13:58:05 np0005589310 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 13:58:05 np0005589310 kernel: br-ex: entered promiscuous mode
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5262] device (eth1): Activation: starting connection 'ci-private-network' (3f70ede9-7960-5c64-9771-a2eedfd4d85a)
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5268] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5279] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5283] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5289] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5293] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5304] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5306] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5307] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5308] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5310] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5311] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5328] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5337] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5340] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5344] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5348] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5354] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5358] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5363] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5367] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5372] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5376] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5380] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5384] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5392] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5398] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5406] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5414] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5422] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 kernel: vlan22: entered promiscuous mode
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5428] device (eth1): Activation: successful, device activated.
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5443] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5478] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5480] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 kernel: vlan21: entered promiscuous mode
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5484] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 13:58:05 np0005589310 systemd-udevd[51691]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5518] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5532] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5546] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5548] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5553] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 13:58:05 np0005589310 kernel: vlan23: entered promiscuous mode
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5601] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5615] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 kernel: vlan20: entered promiscuous mode
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5634] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5637] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5641] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 13:58:05 np0005589310 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5731] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5743] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5754] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5771] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5781] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5782] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5789] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5798] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5799] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 13:58:05 np0005589310 NetworkManager[48913]: <info>  [1768935485.5804] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 13:58:06 np0005589310 NetworkManager[48913]: <info>  [1768935486.7013] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51686 uid=0 result="success"
Jan 20 13:58:06 np0005589310 NetworkManager[48913]: <info>  [1768935486.8326] checkpoint[0x55d9d0016950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 20 13:58:06 np0005589310 NetworkManager[48913]: <info>  [1768935486.8328] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51686 uid=0 result="success"
Jan 20 13:58:07 np0005589310 NetworkManager[48913]: <info>  [1768935487.0752] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51686 uid=0 result="success"
Jan 20 13:58:07 np0005589310 NetworkManager[48913]: <info>  [1768935487.0760] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51686 uid=0 result="success"
Jan 20 13:58:07 np0005589310 NetworkManager[48913]: <info>  [1768935487.2510] audit: op="networking-control" arg="global-dns-configuration" pid=51686 uid=0 result="success"
Jan 20 13:58:07 np0005589310 NetworkManager[48913]: <info>  [1768935487.2541] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 20 13:58:07 np0005589310 NetworkManager[48913]: <info>  [1768935487.2570] audit: op="networking-control" arg="global-dns-configuration" pid=51686 uid=0 result="success"
Jan 20 13:58:07 np0005589310 NetworkManager[48913]: <info>  [1768935487.2589] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51686 uid=0 result="success"
Jan 20 13:58:07 np0005589310 python3.9[52045]: ansible-ansible.legacy.async_status Invoked with jid=j188714558496.51680 mode=status _async_dir=/root/.ansible_async
Jan 20 13:58:07 np0005589310 NetworkManager[48913]: <info>  [1768935487.3783] checkpoint[0x55d9d0016a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 20 13:58:07 np0005589310 NetworkManager[48913]: <info>  [1768935487.3788] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51686 uid=0 result="success"
Jan 20 13:58:07 np0005589310 ansible-async_wrapper.py[51684]: Module complete (51684)
Jan 20 13:58:08 np0005589310 ansible-async_wrapper.py[51683]: Done in kid B.
Jan 20 13:58:10 np0005589310 python3.9[52149]: ansible-ansible.legacy.async_status Invoked with jid=j188714558496.51680 mode=status _async_dir=/root/.ansible_async
Jan 20 13:58:11 np0005589310 python3.9[52249]: ansible-ansible.legacy.async_status Invoked with jid=j188714558496.51680 mode=cleanup _async_dir=/root/.ansible_async
Jan 20 13:58:11 np0005589310 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 13:58:11 np0005589310 python3.9[52401]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:58:12 np0005589310 python3.9[52526]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935491.5024483-317-195957931786002/.source.returncode _original_basename=.h4a5dbis follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:58:13 np0005589310 python3.9[52680]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:58:13 np0005589310 python3.9[52803]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935492.615944-333-144128445078362/.source.cfg _original_basename=.m__ydcdp follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:58:14 np0005589310 python3.9[52955]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:58:14 np0005589310 systemd[1]: Reloading Network Manager...
Jan 20 13:58:14 np0005589310 NetworkManager[48913]: <info>  [1768935494.2396] audit: op="reload" arg="0" pid=52960 uid=0 result="success"
Jan 20 13:58:14 np0005589310 NetworkManager[48913]: <info>  [1768935494.2406] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 20 13:58:14 np0005589310 systemd[1]: Reloaded Network Manager.
Jan 20 13:58:14 np0005589310 systemd[1]: session-10.scope: Deactivated successfully.
Jan 20 13:58:14 np0005589310 systemd[1]: session-10.scope: Consumed 50.193s CPU time.
Jan 20 13:58:14 np0005589310 systemd-logind[797]: Session 10 logged out. Waiting for processes to exit.
Jan 20 13:58:14 np0005589310 systemd-logind[797]: Removed session 10.
Jan 20 13:58:21 np0005589310 systemd-logind[797]: New session 11 of user zuul.
Jan 20 13:58:21 np0005589310 systemd[1]: Started Session 11 of User zuul.
Jan 20 13:58:22 np0005589310 python3.9[53144]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:58:23 np0005589310 python3.9[53298]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:58:24 np0005589310 python3.9[53492]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:24 np0005589310 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 13:58:24 np0005589310 systemd[1]: session-11.scope: Deactivated successfully.
Jan 20 13:58:24 np0005589310 systemd[1]: session-11.scope: Consumed 2.167s CPU time.
Jan 20 13:58:24 np0005589310 systemd-logind[797]: Session 11 logged out. Waiting for processes to exit.
Jan 20 13:58:24 np0005589310 systemd-logind[797]: Removed session 11.
Jan 20 13:58:30 np0005589310 systemd-logind[797]: New session 12 of user zuul.
Jan 20 13:58:30 np0005589310 systemd[1]: Started Session 12 of User zuul.
Jan 20 13:58:31 np0005589310 python3.9[53674]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:58:32 np0005589310 python3.9[53828]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:58:33 np0005589310 python3.9[53984]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:58:33 np0005589310 python3.9[54069]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:58:36 np0005589310 python3.9[54222]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:58:37 np0005589310 python3.9[54418]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:58:37 np0005589310 python3.9[54572]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:38 np0005589310 systemd[1]: var-lib-containers-storage-overlay-compat3898173099-merged.mount: Deactivated successfully.
Jan 20 13:58:38 np0005589310 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1129975980-merged.mount: Deactivated successfully.
Jan 20 13:58:38 np0005589310 podman[54573]: 2026-01-20 18:58:38.027319029 +0000 UTC m=+0.052505213 system refresh
Jan 20 13:58:38 np0005589310 python3.9[54735]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:58:39 np0005589310 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 13:58:39 np0005589310 python3.9[54858]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935518.1998527-74-174261014931238/.source.json follow=False _original_basename=podman_network_config.j2 checksum=db153e063bda690dbde9b625a14eb97c349f5d6f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:58:40 np0005589310 python3.9[55010]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:58:40 np0005589310 python3.9[55133]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935519.6916678-89-168550325786552/.source.conf follow=False _original_basename=registries.conf.j2 checksum=231117e605c41d48bc567c0404cb51471711010a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:58:41 np0005589310 python3.9[55285]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:58:41 np0005589310 python3.9[55437]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:58:42 np0005589310 python3.9[55589]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:58:43 np0005589310 python3.9[55741]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:58:43 np0005589310 python3.9[55893]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:58:45 np0005589310 python3.9[56046]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:58:46 np0005589310 python3.9[56200]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:58:47 np0005589310 python3.9[56352]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 13:58:47 np0005589310 python3.9[56504]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:58:48 np0005589310 python3.9[56657]: ansible-service_facts Invoked
Jan 20 13:58:48 np0005589310 network[56674]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 13:58:48 np0005589310 network[56675]: 'network-scripts' will be removed from distribution in near future.
Jan 20 13:58:48 np0005589310 network[56676]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 13:58:54 np0005589310 python3.9[57128]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 13:58:56 np0005589310 python3.9[57281]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 20 13:58:57 np0005589310 python3.9[57433]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:58:58 np0005589310 python3.9[57558]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935537.3010283-233-10049290404278/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:58:59 np0005589310 python3.9[57712]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:58:59 np0005589310 python3.9[57837]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935538.6792006-248-187676939680742/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:00 np0005589310 python3.9[57991]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:01 np0005589310 python3.9[58145]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:59:02 np0005589310 python3.9[58229]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:59:03 np0005589310 python3.9[58383]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 13:59:04 np0005589310 python3.9[58467]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:59:04 np0005589310 chronyd[784]: chronyd exiting
Jan 20 13:59:04 np0005589310 systemd[1]: Stopping NTP client/server...
Jan 20 13:59:04 np0005589310 systemd[1]: chronyd.service: Deactivated successfully.
Jan 20 13:59:04 np0005589310 systemd[1]: Stopped NTP client/server.
Jan 20 13:59:04 np0005589310 systemd[1]: Starting NTP client/server...
Jan 20 13:59:04 np0005589310 chronyd[58476]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 20 13:59:04 np0005589310 chronyd[58476]: Frequency -23.108 +/- 0.486 ppm read from /var/lib/chrony/drift
Jan 20 13:59:04 np0005589310 chronyd[58476]: Loaded seccomp filter (level 2)
Jan 20 13:59:04 np0005589310 systemd[1]: Started NTP client/server.
Jan 20 13:59:05 np0005589310 systemd[1]: session-12.scope: Deactivated successfully.
Jan 20 13:59:05 np0005589310 systemd[1]: session-12.scope: Consumed 24.653s CPU time.
Jan 20 13:59:05 np0005589310 systemd-logind[797]: Session 12 logged out. Waiting for processes to exit.
Jan 20 13:59:05 np0005589310 systemd-logind[797]: Removed session 12.
Jan 20 13:59:10 np0005589310 systemd-logind[797]: New session 13 of user zuul.
Jan 20 13:59:10 np0005589310 systemd[1]: Started Session 13 of User zuul.
Jan 20 13:59:11 np0005589310 python3.9[58657]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:11 np0005589310 python3.9[58809]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:12 np0005589310 python3.9[58932]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935551.3144825-29-142132317846135/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:12 np0005589310 systemd[1]: session-13.scope: Deactivated successfully.
Jan 20 13:59:12 np0005589310 systemd[1]: session-13.scope: Consumed 1.531s CPU time.
Jan 20 13:59:12 np0005589310 systemd-logind[797]: Session 13 logged out. Waiting for processes to exit.
Jan 20 13:59:12 np0005589310 systemd-logind[797]: Removed session 13.
Jan 20 13:59:18 np0005589310 systemd-logind[797]: New session 14 of user zuul.
Jan 20 13:59:18 np0005589310 systemd[1]: Started Session 14 of User zuul.
Jan 20 13:59:19 np0005589310 python3.9[59110]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 13:59:21 np0005589310 python3.9[59266]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:22 np0005589310 python3.9[59441]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:22 np0005589310 python3.9[59564]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1768935561.4677958-36-64825458555269/.source.json _original_basename=.zthxwr2y follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:23 np0005589310 python3.9[59716]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:24 np0005589310 python3.9[59839]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935563.234755-59-85937451567981/.source _original_basename=.bmejx97j follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:24 np0005589310 python3.9[59991]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:59:25 np0005589310 python3.9[60143]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:26 np0005589310 python3.9[60266]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935565.1422527-83-156714611444195/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:59:26 np0005589310 python3.9[60418]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:27 np0005589310 python3.9[60541]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935566.2098558-83-13006947650402/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 13:59:28 np0005589310 python3.9[60693]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:28 np0005589310 python3.9[60845]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:29 np0005589310 python3.9[60968]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935568.2058494-120-113888982139769/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:29 np0005589310 python3.9[61120]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:30 np0005589310 python3.9[61243]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935569.396869-135-146289972984354/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:31 np0005589310 python3.9[61395]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:59:31 np0005589310 systemd[1]: Reloading.
Jan 20 13:59:31 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:59:31 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:59:31 np0005589310 systemd[1]: Reloading.
Jan 20 13:59:31 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:59:31 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:59:31 np0005589310 systemd[1]: Starting EDPM Container Shutdown...
Jan 20 13:59:31 np0005589310 systemd[1]: Finished EDPM Container Shutdown.
Jan 20 13:59:32 np0005589310 python3.9[61622]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:33 np0005589310 python3.9[61745]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935572.0418088-158-244583400376586/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:33 np0005589310 python3.9[61897]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:34 np0005589310 python3.9[62020]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935573.2934673-173-214388186955370/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:34 np0005589310 python3.9[62172]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:59:35 np0005589310 systemd[1]: Reloading.
Jan 20 13:59:35 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:59:35 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:59:35 np0005589310 systemd[1]: Reloading.
Jan 20 13:59:35 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:59:35 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:59:35 np0005589310 systemd[1]: Starting Create netns directory...
Jan 20 13:59:35 np0005589310 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 13:59:35 np0005589310 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 13:59:35 np0005589310 systemd[1]: Finished Create netns directory.
Jan 20 13:59:36 np0005589310 python3.9[62398]: ansible-ansible.builtin.service_facts Invoked
Jan 20 13:59:36 np0005589310 network[62415]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 13:59:36 np0005589310 network[62416]: 'network-scripts' will be removed from distribution in near future.
Jan 20 13:59:36 np0005589310 network[62417]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 13:59:39 np0005589310 python3.9[62679]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:59:39 np0005589310 systemd[1]: Reloading.
Jan 20 13:59:39 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:59:39 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:59:39 np0005589310 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 20 13:59:40 np0005589310 iptables.init[62720]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 20 13:59:40 np0005589310 iptables.init[62720]: iptables: Flushing firewall rules: [  OK  ]
Jan 20 13:59:40 np0005589310 systemd[1]: iptables.service: Deactivated successfully.
Jan 20 13:59:40 np0005589310 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 20 13:59:40 np0005589310 python3.9[62916]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:59:41 np0005589310 python3.9[63070]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 13:59:41 np0005589310 systemd[1]: Reloading.
Jan 20 13:59:41 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 13:59:41 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 13:59:41 np0005589310 systemd[1]: Starting Netfilter Tables...
Jan 20 13:59:41 np0005589310 systemd[1]: Finished Netfilter Tables.
Jan 20 13:59:42 np0005589310 python3.9[63262]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:59:43 np0005589310 python3.9[63415]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:44 np0005589310 python3.9[63540]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935583.1455934-242-207465435754999/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:44 np0005589310 python3.9[63693]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 13:59:44 np0005589310 systemd[1]: Reloading OpenSSH server daemon...
Jan 20 13:59:44 np0005589310 systemd[1]: Reloaded OpenSSH server daemon.
Jan 20 13:59:45 np0005589310 python3.9[63849]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:46 np0005589310 python3.9[64001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:46 np0005589310 python3.9[64124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935585.778155-273-70952657120155/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:47 np0005589310 python3.9[64276]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 20 13:59:47 np0005589310 systemd[1]: Starting Time & Date Service...
Jan 20 13:59:47 np0005589310 systemd[1]: Started Time & Date Service.
Jan 20 13:59:49 np0005589310 python3.9[64432]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:50 np0005589310 python3.9[64584]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:50 np0005589310 python3.9[64707]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935589.8355293-308-141311201081625/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:51 np0005589310 python3.9[64859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:51 np0005589310 python3.9[64982]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935591.0020988-323-111044625505526/.source.yaml _original_basename=.g8ej4bjq follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:52 np0005589310 python3.9[65134]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:53 np0005589310 python3.9[65257]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935592.1043568-338-211352668903382/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:53 np0005589310 python3.9[65409]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:59:54 np0005589310 python3.9[65562]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 13:59:55 np0005589310 python3[65715]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 13:59:55 np0005589310 python3.9[65867]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:56 np0005589310 python3.9[65990]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935595.4144685-377-157423801836268/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:57 np0005589310 python3.9[66142]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:57 np0005589310 python3.9[66265]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935596.6032147-392-90019042857631/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:58 np0005589310 python3.9[66417]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:58 np0005589310 python3.9[66540]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935597.753688-407-266459316976050/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 13:59:59 np0005589310 python3.9[66692]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 13:59:59 np0005589310 python3.9[66815]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935598.8526566-422-80215271890943/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:00:00 np0005589310 python3.9[66967]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:00:00 np0005589310 python3.9[67090]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935599.9296067-437-103043543290663/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:00:01 np0005589310 python3.9[67242]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:00:02 np0005589310 python3.9[67394]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:00:02 np0005589310 python3.9[67553]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:00:03 np0005589310 python3.9[67706]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:00:04 np0005589310 python3.9[67858]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:00:05 np0005589310 python3.9[68010]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 14:00:05 np0005589310 python3.9[68163]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 14:00:06 np0005589310 systemd[1]: session-14.scope: Deactivated successfully.
Jan 20 14:00:06 np0005589310 systemd[1]: session-14.scope: Consumed 35.121s CPU time.
Jan 20 14:00:06 np0005589310 systemd-logind[797]: Session 14 logged out. Waiting for processes to exit.
Jan 20 14:00:06 np0005589310 systemd-logind[797]: Removed session 14.
Jan 20 14:00:11 np0005589310 systemd-logind[797]: New session 15 of user zuul.
Jan 20 14:00:11 np0005589310 systemd[1]: Started Session 15 of User zuul.
Jan 20 14:00:11 np0005589310 python3.9[68344]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 20 14:00:12 np0005589310 python3.9[68496]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:00:13 np0005589310 python3.9[68648]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:00:14 np0005589310 python3.9[68800]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCz3b07HV3uJtYZS5SXFV7UOV5We+VhL7E4MInSTY31YDxLu74UtLEKRyupRLnE9d5cVG8e5JHiBt72dhLY2VbhACUUzWUR1aTUO/jAfEzM97GQgzgl5skY63LeYydonq3csjRREkj9YaliQuWdLTocUhfB/0t0HX525BkLTzTfdhjhDOY6NzeJUhZjMKy9uM/RZvITLdPgnYTjcLN12hAtWjUGKvAcUEfWpRW0efbUgaPSuNuRxZWXNuusp0UBopS1fv5P4Ea0VhwUmNZ0IJC3eljfUuHXRdQr6A4px/e8yVSwUILaYNL6ettCVX8HNvIxk6xmT5clWgr+Vibu+qnmAoOdOqoRYdZgH/26kU5ZMOYv8wpa/TUoXbD1ClrmNUQNjD4kSFXQtI1uhLxuNYTzf4ftLLy92oo3ENBg4Oph0Hw00CUPNDcsAgD65KYg8/Frjms4h8AUjYrV2ktrqAPVEvcItbD5e7/cAcF1AnB9aHpNzgUo1iUbMmXN2/I/fQ0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM5Jhg8QlHJt93+bopoKxGN+UwIsXQojyFhcp0nCuLCA#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCNoSkRzTUMXF81nHL5zY2fe7DfBkbvi2MFoFs1WurMuV9pkgr/kpqf2yHrz5D04ncV4FFj7hs+/ZPi7NjXPcIw=#012 create=True mode=0644 path=/tmp/ansible.p6ziftnk state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:00:14 np0005589310 python3.9[68952]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.p6ziftnk' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:00:15 np0005589310 python3.9[69106]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.p6ziftnk state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:00:16 np0005589310 systemd[1]: session-15.scope: Deactivated successfully.
Jan 20 14:00:16 np0005589310 systemd[1]: session-15.scope: Consumed 3.139s CPU time.
Jan 20 14:00:16 np0005589310 systemd-logind[797]: Session 15 logged out. Waiting for processes to exit.
Jan 20 14:00:16 np0005589310 systemd-logind[797]: Removed session 15.
Jan 20 14:00:17 np0005589310 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 14:00:20 np0005589310 systemd-logind[797]: New session 16 of user zuul.
Jan 20 14:00:20 np0005589310 systemd[1]: Started Session 16 of User zuul.
Jan 20 14:00:21 np0005589310 python3.9[69287]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:00:23 np0005589310 python3.9[69443]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 20 14:00:23 np0005589310 python3.9[69597]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:00:24 np0005589310 python3.9[69750]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:00:25 np0005589310 python3.9[69903]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:00:26 np0005589310 python3.9[70057]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:00:26 np0005589310 python3.9[70212]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:00:27 np0005589310 systemd[1]: session-16.scope: Deactivated successfully.
Jan 20 14:00:27 np0005589310 systemd[1]: session-16.scope: Consumed 4.289s CPU time.
Jan 20 14:00:27 np0005589310 systemd-logind[797]: Session 16 logged out. Waiting for processes to exit.
Jan 20 14:00:27 np0005589310 systemd-logind[797]: Removed session 16.
Jan 20 14:00:32 np0005589310 systemd-logind[797]: New session 17 of user zuul.
Jan 20 14:00:32 np0005589310 systemd[1]: Started Session 17 of User zuul.
Jan 20 14:00:33 np0005589310 python3.9[70392]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:00:34 np0005589310 python3.9[70548]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:00:34 np0005589310 python3.9[70632]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 14:00:36 np0005589310 python3.9[70783]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:00:38 np0005589310 python3.9[70934]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:00:38 np0005589310 python3.9[71084]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:00:38 np0005589310 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:00:38 np0005589310 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:00:39 np0005589310 python3.9[71235]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:00:40 np0005589310 systemd[1]: session-17.scope: Deactivated successfully.
Jan 20 14:00:40 np0005589310 systemd[1]: session-17.scope: Consumed 5.914s CPU time.
Jan 20 14:00:40 np0005589310 systemd-logind[797]: Session 17 logged out. Waiting for processes to exit.
Jan 20 14:00:40 np0005589310 systemd-logind[797]: Removed session 17.
Jan 20 14:00:48 np0005589310 systemd-logind[797]: New session 18 of user zuul.
Jan 20 14:00:48 np0005589310 systemd[1]: Started Session 18 of User zuul.
Jan 20 14:00:54 np0005589310 python3[72001]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:00:56 np0005589310 python3[72096]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 14:00:57 np0005589310 python3[72123]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 14:00:57 np0005589310 python3[72149]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:00:57 np0005589310 kernel: loop: module loaded
Jan 20 14:00:57 np0005589310 kernel: loop3: detected capacity change from 0 to 41943040
Jan 20 14:00:58 np0005589310 python3[72184]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:00:58 np0005589310 lvm[72187]: PV /dev/loop3 not used.
Jan 20 14:00:58 np0005589310 lvm[72196]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:00:58 np0005589310 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 20 14:00:58 np0005589310 lvm[72198]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 20 14:00:58 np0005589310 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 20 14:00:58 np0005589310 python3[72276]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 14:00:59 np0005589310 python3[72349]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935658.701823-36189-20334441519892/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:00:59 np0005589310 python3[72399]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:01:00 np0005589310 systemd[1]: Reloading.
Jan 20 14:01:00 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:01:00 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:01:00 np0005589310 systemd[1]: Starting Ceph OSD losetup...
Jan 20 14:01:00 np0005589310 bash[72440]: /dev/loop3: [64513]:4194935 (/var/lib/ceph-osd-0.img)
Jan 20 14:01:00 np0005589310 systemd[1]: Finished Ceph OSD losetup.
Jan 20 14:01:00 np0005589310 lvm[72441]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:01:00 np0005589310 lvm[72441]: VG ceph_vg0 finished
Jan 20 14:01:00 np0005589310 python3[72467]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 14:01:02 np0005589310 python3[72509]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 14:01:02 np0005589310 python3[72535]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:01:02 np0005589310 kernel: loop4: detected capacity change from 0 to 41943040
Jan 20 14:01:03 np0005589310 python3[72567]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:01:03 np0005589310 lvm[72570]: PV /dev/loop4 not used.
Jan 20 14:01:03 np0005589310 lvm[72572]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:01:03 np0005589310 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 20 14:01:03 np0005589310 lvm[72583]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:01:03 np0005589310 lvm[72583]: VG ceph_vg1 finished
Jan 20 14:01:03 np0005589310 lvm[72581]:  1 logical volume(s) in volume group "ceph_vg1" now active
Jan 20 14:01:03 np0005589310 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 20 14:01:03 np0005589310 python3[72661]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 14:01:04 np0005589310 python3[72734]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935663.487693-36231-256721410467085/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:01:04 np0005589310 python3[72784]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:01:04 np0005589310 systemd[1]: Reloading.
Jan 20 14:01:04 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:01:04 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:01:05 np0005589310 systemd[1]: Starting Ceph OSD losetup...
Jan 20 14:01:05 np0005589310 bash[72824]: /dev/loop4: [64513]:4328577 (/var/lib/ceph-osd-1.img)
Jan 20 14:01:05 np0005589310 systemd[1]: Finished Ceph OSD losetup.
Jan 20 14:01:05 np0005589310 lvm[72825]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:01:05 np0005589310 lvm[72825]: VG ceph_vg1 finished
Jan 20 14:01:05 np0005589310 python3[72851]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 14:01:07 np0005589310 python3[72878]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 14:01:07 np0005589310 python3[72904]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:01:07 np0005589310 kernel: loop5: detected capacity change from 0 to 41943040
Jan 20 14:01:07 np0005589310 python3[72936]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:01:07 np0005589310 lvm[72939]: PV /dev/loop5 not used.
Jan 20 14:01:08 np0005589310 lvm[72949]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:01:08 np0005589310 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 20 14:01:08 np0005589310 lvm[72951]:  1 logical volume(s) in volume group "ceph_vg2" now active
Jan 20 14:01:08 np0005589310 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 20 14:01:08 np0005589310 python3[73029]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 14:01:08 np0005589310 python3[73102]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935668.2830946-36258-245732525978810/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:01:09 np0005589310 python3[73152]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:01:09 np0005589310 systemd[1]: Reloading.
Jan 20 14:01:09 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:01:09 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:01:09 np0005589310 systemd[1]: Starting Ceph OSD losetup...
Jan 20 14:01:09 np0005589310 bash[73192]: /dev/loop5: [64513]:4328578 (/var/lib/ceph-osd-2.img)
Jan 20 14:01:09 np0005589310 systemd[1]: Finished Ceph OSD losetup.
Jan 20 14:01:09 np0005589310 lvm[73193]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:01:09 np0005589310 lvm[73193]: VG ceph_vg2 finished
Jan 20 14:01:11 np0005589310 python3[73217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:01:14 np0005589310 python3[73310]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 14:01:14 np0005589310 chronyd[58476]: Selected source 167.160.187.179 (pool.ntp.org)
Jan 20 14:01:16 np0005589310 python3[73367]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 14:01:20 np0005589310 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 14:01:20 np0005589310 systemd[1]: Starting man-db-cache-update.service...
Jan 20 14:01:20 np0005589310 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 14:01:20 np0005589310 systemd[1]: Finished man-db-cache-update.service.
Jan 20 14:01:20 np0005589310 systemd[1]: run-ra74deb57212a4314bb94d8f7b5985e13.service: Deactivated successfully.
Jan 20 14:01:20 np0005589310 python3[73486]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 14:01:21 np0005589310 python3[73514]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:01:21 np0005589310 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 14:01:21 np0005589310 python3[73554]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:01:22 np0005589310 python3[73580]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:01:23 np0005589310 python3[73658]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 14:01:23 np0005589310 python3[73731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935682.983403-36406-268475204916456/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:01:24 np0005589310 python3[73833]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 14:01:24 np0005589310 python3[73906]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935684.1594503-36424-130976749764674/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:01:25 np0005589310 python3[73956]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 14:01:25 np0005589310 python3[73984]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 14:01:25 np0005589310 python3[74012]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 14:01:26 np0005589310 python3[74038]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 14:01:26 np0005589310 python3[74064]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:01:26 np0005589310 systemd-logind[797]: New session 19 of user ceph-admin.
Jan 20 14:01:26 np0005589310 systemd[1]: Created slice User Slice of UID 42477.
Jan 20 14:01:26 np0005589310 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 20 14:01:26 np0005589310 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 20 14:01:26 np0005589310 systemd[1]: Starting User Manager for UID 42477...
Jan 20 14:01:26 np0005589310 systemd[74072]: Queued start job for default target Main User Target.
Jan 20 14:01:26 np0005589310 systemd[74072]: Created slice User Application Slice.
Jan 20 14:01:26 np0005589310 systemd[74072]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:01:26 np0005589310 systemd[74072]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 14:01:26 np0005589310 systemd[74072]: Reached target Paths.
Jan 20 14:01:26 np0005589310 systemd[74072]: Reached target Timers.
Jan 20 14:01:26 np0005589310 systemd[74072]: Starting D-Bus User Message Bus Socket...
Jan 20 14:01:26 np0005589310 systemd[74072]: Starting Create User's Volatile Files and Directories...
Jan 20 14:01:26 np0005589310 systemd[74072]: Listening on D-Bus User Message Bus Socket.
Jan 20 14:01:26 np0005589310 systemd[74072]: Reached target Sockets.
Jan 20 14:01:26 np0005589310 systemd[74072]: Finished Create User's Volatile Files and Directories.
Jan 20 14:01:26 np0005589310 systemd[74072]: Reached target Basic System.
Jan 20 14:01:26 np0005589310 systemd[74072]: Reached target Main User Target.
Jan 20 14:01:26 np0005589310 systemd[74072]: Startup finished in 128ms.
Jan 20 14:01:26 np0005589310 systemd[1]: Started User Manager for UID 42477.
Jan 20 14:01:26 np0005589310 systemd[1]: Started Session 19 of User ceph-admin.
Jan 20 14:01:27 np0005589310 systemd[1]: session-19.scope: Deactivated successfully.
Jan 20 14:01:27 np0005589310 systemd-logind[797]: Session 19 logged out. Waiting for processes to exit.
Jan 20 14:01:27 np0005589310 systemd-logind[797]: Removed session 19.
Jan 20 14:01:27 np0005589310 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 14:01:27 np0005589310 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 14:01:30 np0005589310 systemd[1]: var-lib-containers-storage-overlay-compat812151866-lower\x2dmapped.mount: Deactivated successfully.
Jan 20 14:01:37 np0005589310 systemd[1]: Stopping User Manager for UID 42477...
Jan 20 14:01:37 np0005589310 systemd[74072]: Activating special unit Exit the Session...
Jan 20 14:01:37 np0005589310 systemd[74072]: Stopped target Main User Target.
Jan 20 14:01:37 np0005589310 systemd[74072]: Stopped target Basic System.
Jan 20 14:01:37 np0005589310 systemd[74072]: Stopped target Paths.
Jan 20 14:01:37 np0005589310 systemd[74072]: Stopped target Sockets.
Jan 20 14:01:37 np0005589310 systemd[74072]: Stopped target Timers.
Jan 20 14:01:37 np0005589310 systemd[74072]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:01:37 np0005589310 systemd[74072]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 14:01:37 np0005589310 systemd[74072]: Closed D-Bus User Message Bus Socket.
Jan 20 14:01:37 np0005589310 systemd[74072]: Stopped Create User's Volatile Files and Directories.
Jan 20 14:01:37 np0005589310 systemd[74072]: Removed slice User Application Slice.
Jan 20 14:01:37 np0005589310 systemd[74072]: Reached target Shutdown.
Jan 20 14:01:37 np0005589310 systemd[74072]: Finished Exit the Session.
Jan 20 14:01:37 np0005589310 systemd[74072]: Reached target Exit the Session.
Jan 20 14:01:37 np0005589310 systemd[1]: user@42477.service: Deactivated successfully.
Jan 20 14:01:37 np0005589310 systemd[1]: Stopped User Manager for UID 42477.
Jan 20 14:01:37 np0005589310 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 20 14:01:37 np0005589310 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 20 14:01:37 np0005589310 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 20 14:01:37 np0005589310 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 20 14:01:37 np0005589310 systemd[1]: Removed slice User Slice of UID 42477.
Jan 20 14:02:00 np0005589310 podman[74166]: 2026-01-20 19:02:00.332834297 +0000 UTC m=+33.028037946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:00 np0005589310 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 14:02:00 np0005589310 podman[74232]: 2026-01-20 19:02:00.404016091 +0000 UTC m=+0.043177448 container create d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a (image=quay.io/ceph/ceph:v20, name=adoring_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:00 np0005589310 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 20 14:02:00 np0005589310 systemd[1]: Started libpod-conmon-d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a.scope.
Jan 20 14:02:00 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:00 np0005589310 podman[74232]: 2026-01-20 19:02:00.381622078 +0000 UTC m=+0.020783465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:00 np0005589310 podman[74232]: 2026-01-20 19:02:00.503999664 +0000 UTC m=+0.143161051 container init d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a (image=quay.io/ceph/ceph:v20, name=adoring_lichterman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:02:00 np0005589310 podman[74232]: 2026-01-20 19:02:00.512020505 +0000 UTC m=+0.151181872 container start d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a (image=quay.io/ceph/ceph:v20, name=adoring_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 20 14:02:00 np0005589310 podman[74232]: 2026-01-20 19:02:00.521621203 +0000 UTC m=+0.160782590 container attach d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a (image=quay.io/ceph/ceph:v20, name=adoring_lichterman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:00 np0005589310 adoring_lichterman[74248]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 20 14:02:00 np0005589310 systemd[1]: libpod-d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a.scope: Deactivated successfully.
Jan 20 14:02:00 np0005589310 podman[74232]: 2026-01-20 19:02:00.610926041 +0000 UTC m=+0.250087408 container died d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a (image=quay.io/ceph/ceph:v20, name=adoring_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:02:00 np0005589310 systemd[1]: var-lib-containers-storage-overlay-c863a7b0566fd79ba2055cf6570b60b23973452636fc0d389290f6aecd555258-merged.mount: Deactivated successfully.
Jan 20 14:02:00 np0005589310 podman[74232]: 2026-01-20 19:02:00.692199637 +0000 UTC m=+0.331361004 container remove d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a (image=quay.io/ceph/ceph:v20, name=adoring_lichterman, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:00 np0005589310 systemd[1]: libpod-conmon-d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a.scope: Deactivated successfully.
Jan 20 14:02:00 np0005589310 podman[74268]: 2026-01-20 19:02:00.75823364 +0000 UTC m=+0.042537314 container create a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53 (image=quay.io/ceph/ceph:v20, name=fervent_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:00 np0005589310 systemd[1]: Started libpod-conmon-a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53.scope.
Jan 20 14:02:00 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:00 np0005589310 podman[74268]: 2026-01-20 19:02:00.814660664 +0000 UTC m=+0.098964338 container init a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53 (image=quay.io/ceph/ceph:v20, name=fervent_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:00 np0005589310 podman[74268]: 2026-01-20 19:02:00.81994714 +0000 UTC m=+0.104250814 container start a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53 (image=quay.io/ceph/ceph:v20, name=fervent_hofstadter, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:02:00 np0005589310 fervent_hofstadter[74284]: 167 167
Jan 20 14:02:00 np0005589310 podman[74268]: 2026-01-20 19:02:00.82328459 +0000 UTC m=+0.107588294 container attach a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53 (image=quay.io/ceph/ceph:v20, name=fervent_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:02:00 np0005589310 systemd[1]: libpod-a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53.scope: Deactivated successfully.
Jan 20 14:02:00 np0005589310 podman[74268]: 2026-01-20 19:02:00.823891364 +0000 UTC m=+0.108195038 container died a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53 (image=quay.io/ceph/ceph:v20, name=fervent_hofstadter, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:02:00 np0005589310 podman[74268]: 2026-01-20 19:02:00.740636451 +0000 UTC m=+0.024940145 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:00 np0005589310 podman[74268]: 2026-01-20 19:02:00.859415951 +0000 UTC m=+0.143719625 container remove a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53 (image=quay.io/ceph/ceph:v20, name=fervent_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:02:00 np0005589310 systemd[1]: libpod-conmon-a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53.scope: Deactivated successfully.
Jan 20 14:02:00 np0005589310 podman[74301]: 2026-01-20 19:02:00.913109789 +0000 UTC m=+0.036459189 container create 3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c (image=quay.io/ceph/ceph:v20, name=sweet_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:02:00 np0005589310 systemd[1]: Started libpod-conmon-3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c.scope.
Jan 20 14:02:00 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:00 np0005589310 podman[74301]: 2026-01-20 19:02:00.972636108 +0000 UTC m=+0.095985528 container init 3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c (image=quay.io/ceph/ceph:v20, name=sweet_proskuriakova, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:02:00 np0005589310 podman[74301]: 2026-01-20 19:02:00.976778285 +0000 UTC m=+0.100127675 container start 3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c (image=quay.io/ceph/ceph:v20, name=sweet_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:00 np0005589310 podman[74301]: 2026-01-20 19:02:00.979746567 +0000 UTC m=+0.103095977 container attach 3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c (image=quay.io/ceph/ceph:v20, name=sweet_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:02:00 np0005589310 podman[74301]: 2026-01-20 19:02:00.896495813 +0000 UTC m=+0.019845243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:00 np0005589310 sweet_proskuriakova[74317]: AQAo0W9ppgNROxAAFxHumGgWP6WYQuYuigzcLw==
Jan 20 14:02:00 np0005589310 systemd[1]: libpod-3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c.scope: Deactivated successfully.
Jan 20 14:02:00 np0005589310 podman[74301]: 2026-01-20 19:02:00.9979091 +0000 UTC m=+0.121258520 container died 3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c (image=quay.io/ceph/ceph:v20, name=sweet_proskuriakova, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:02:01 np0005589310 podman[74301]: 2026-01-20 19:02:01.040177086 +0000 UTC m=+0.163526486 container remove 3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c (image=quay.io/ceph/ceph:v20, name=sweet_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Jan 20 14:02:01 np0005589310 systemd[1]: libpod-conmon-3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c.scope: Deactivated successfully.
Jan 20 14:02:01 np0005589310 podman[74336]: 2026-01-20 19:02:01.094288365 +0000 UTC m=+0.037388751 container create 10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af (image=quay.io/ceph/ceph:v20, name=sleepy_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:02:01 np0005589310 systemd[1]: Started libpod-conmon-10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af.scope.
Jan 20 14:02:01 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:01 np0005589310 podman[74336]: 2026-01-20 19:02:01.147642887 +0000 UTC m=+0.090743293 container init 10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af (image=quay.io/ceph/ceph:v20, name=sleepy_matsumoto, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:01 np0005589310 podman[74336]: 2026-01-20 19:02:01.152204895 +0000 UTC m=+0.095305281 container start 10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af (image=quay.io/ceph/ceph:v20, name=sleepy_matsumoto, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:02:01 np0005589310 podman[74336]: 2026-01-20 19:02:01.155471493 +0000 UTC m=+0.098571869 container attach 10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af (image=quay.io/ceph/ceph:v20, name=sleepy_matsumoto, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 14:02:01 np0005589310 sleepy_matsumoto[74352]: AQAp0W9pdA41ChAAUIj17MqnTKZ3KpckutYrKw==
Jan 20 14:02:01 np0005589310 systemd[1]: libpod-10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af.scope: Deactivated successfully.
Jan 20 14:02:01 np0005589310 podman[74336]: 2026-01-20 19:02:01.078247683 +0000 UTC m=+0.021348089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:01 np0005589310 podman[74336]: 2026-01-20 19:02:01.174782843 +0000 UTC m=+0.117883239 container died 10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af (image=quay.io/ceph/ceph:v20, name=sleepy_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:01 np0005589310 podman[74336]: 2026-01-20 19:02:01.213897085 +0000 UTC m=+0.156997471 container remove 10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af (image=quay.io/ceph/ceph:v20, name=sleepy_matsumoto, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:01 np0005589310 systemd[1]: libpod-conmon-10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af.scope: Deactivated successfully.
Jan 20 14:02:01 np0005589310 podman[74372]: 2026-01-20 19:02:01.268835533 +0000 UTC m=+0.034837411 container create 184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c (image=quay.io/ceph/ceph:v20, name=happy_wing, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:02:01 np0005589310 systemd[1]: Started libpod-conmon-184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c.scope.
Jan 20 14:02:01 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:01 np0005589310 podman[74372]: 2026-01-20 19:02:01.2544064 +0000 UTC m=+0.020408298 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:01 np0005589310 podman[74372]: 2026-01-20 19:02:01.731942405 +0000 UTC m=+0.497944303 container init 184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c (image=quay.io/ceph/ceph:v20, name=happy_wing, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:02:01 np0005589310 podman[74372]: 2026-01-20 19:02:01.736756029 +0000 UTC m=+0.502757907 container start 184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c (image=quay.io/ceph/ceph:v20, name=happy_wing, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 14:02:01 np0005589310 happy_wing[74389]: AQAp0W9paFQhLRAA4Qay79dbXeAWrHBkldsHBg==
Jan 20 14:02:01 np0005589310 systemd[1]: libpod-184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c.scope: Deactivated successfully.
Jan 20 14:02:02 np0005589310 podman[74372]: 2026-01-20 19:02:02.658821473 +0000 UTC m=+1.424823391 container attach 184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c (image=quay.io/ceph/ceph:v20, name=happy_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:02:02 np0005589310 podman[74372]: 2026-01-20 19:02:02.659430767 +0000 UTC m=+1.425432655 container died 184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c (image=quay.io/ceph/ceph:v20, name=happy_wing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:02 np0005589310 systemd[1]: var-lib-containers-storage-overlay-ce2845b4e62f0df10b99ee3c90aca1e9855972c38782fd1388d65621ffe9590a-merged.mount: Deactivated successfully.
Jan 20 14:02:02 np0005589310 podman[74372]: 2026-01-20 19:02:02.702819322 +0000 UTC m=+1.468821200 container remove 184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c (image=quay.io/ceph/ceph:v20, name=happy_wing, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:02:02 np0005589310 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 14:02:02 np0005589310 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 14:02:02 np0005589310 systemd[1]: libpod-conmon-184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c.scope: Deactivated successfully.
Jan 20 14:02:02 np0005589310 podman[74412]: 2026-01-20 19:02:02.762639888 +0000 UTC m=+0.041639584 container create 05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31 (image=quay.io/ceph/ceph:v20, name=recursing_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 14:02:02 np0005589310 systemd[1]: Started libpod-conmon-05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31.scope.
Jan 20 14:02:02 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:02 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ac80bd592b842cab8e57c94dd8e9212da275dd34a419e608cdcb2cf569d97f/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:02 np0005589310 podman[74412]: 2026-01-20 19:02:02.830624248 +0000 UTC m=+0.109623974 container init 05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31 (image=quay.io/ceph/ceph:v20, name=recursing_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 14:02:02 np0005589310 podman[74412]: 2026-01-20 19:02:02.835399771 +0000 UTC m=+0.114399457 container start 05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31 (image=quay.io/ceph/ceph:v20, name=recursing_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 14:02:02 np0005589310 podman[74412]: 2026-01-20 19:02:02.741970625 +0000 UTC m=+0.020970341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:02 np0005589310 podman[74412]: 2026-01-20 19:02:02.83872738 +0000 UTC m=+0.117727076 container attach 05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31 (image=quay.io/ceph/ceph:v20, name=recursing_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:02:02 np0005589310 recursing_liskov[74430]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 20 14:02:02 np0005589310 recursing_liskov[74430]: setting min_mon_release = tentacle
Jan 20 14:02:02 np0005589310 recursing_liskov[74430]: /usr/bin/monmaptool: set fsid to 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 14:02:02 np0005589310 recursing_liskov[74430]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 20 14:02:02 np0005589310 systemd[1]: libpod-05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31.scope: Deactivated successfully.
Jan 20 14:02:02 np0005589310 podman[74412]: 2026-01-20 19:02:02.868052698 +0000 UTC m=+0.147052394 container died 05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31 (image=quay.io/ceph/ceph:v20, name=recursing_liskov, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 14:02:02 np0005589310 podman[74412]: 2026-01-20 19:02:02.899825486 +0000 UTC m=+0.178825182 container remove 05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31 (image=quay.io/ceph/ceph:v20, name=recursing_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:02 np0005589310 systemd[1]: libpod-conmon-05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31.scope: Deactivated successfully.
Jan 20 14:02:02 np0005589310 podman[74448]: 2026-01-20 19:02:02.962448718 +0000 UTC m=+0.041076060 container create e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef (image=quay.io/ceph/ceph:v20, name=lucid_moser, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:02:03 np0005589310 systemd[1]: Started libpod-conmon-e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef.scope.
Jan 20 14:02:03 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9abed47248996de709c45e42c24a0fca60440f96e3fbc1b99192244919e8260/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9abed47248996de709c45e42c24a0fca60440f96e3fbc1b99192244919e8260/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9abed47248996de709c45e42c24a0fca60440f96e3fbc1b99192244919e8260/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9abed47248996de709c45e42c24a0fca60440f96e3fbc1b99192244919e8260/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:03 np0005589310 podman[74448]: 2026-01-20 19:02:03.027535278 +0000 UTC m=+0.106162610 container init e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef (image=quay.io/ceph/ceph:v20, name=lucid_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:02:03 np0005589310 podman[74448]: 2026-01-20 19:02:03.03223673 +0000 UTC m=+0.110864062 container start e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef (image=quay.io/ceph/ceph:v20, name=lucid_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:03 np0005589310 podman[74448]: 2026-01-20 19:02:03.035231122 +0000 UTC m=+0.113858454 container attach e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef (image=quay.io/ceph/ceph:v20, name=lucid_moser, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:02:03 np0005589310 podman[74448]: 2026-01-20 19:02:02.942520592 +0000 UTC m=+0.021147944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:03 np0005589310 systemd[1]: libpod-e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef.scope: Deactivated successfully.
Jan 20 14:02:03 np0005589310 podman[74448]: 2026-01-20 19:02:03.142496516 +0000 UTC m=+0.221123848 container died e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef (image=quay.io/ceph/ceph:v20, name=lucid_moser, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:03 np0005589310 podman[74448]: 2026-01-20 19:02:03.180560763 +0000 UTC m=+0.259188105 container remove e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef (image=quay.io/ceph/ceph:v20, name=lucid_moser, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 14:02:03 np0005589310 systemd[1]: libpod-conmon-e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef.scope: Deactivated successfully.
Jan 20 14:02:03 np0005589310 systemd[1]: Reloading.
Jan 20 14:02:03 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:02:03 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:02:03 np0005589310 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 14:02:03 np0005589310 systemd[1]: var-lib-containers-storage-overlay-31ac80bd592b842cab8e57c94dd8e9212da275dd34a419e608cdcb2cf569d97f-merged.mount: Deactivated successfully.
Jan 20 14:02:03 np0005589310 systemd[1]: Reloading.
Jan 20 14:02:03 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:02:03 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:02:03 np0005589310 systemd[1]: Reached target All Ceph clusters and services.
Jan 20 14:02:03 np0005589310 systemd[1]: Reloading.
Jan 20 14:02:03 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:02:03 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:02:03 np0005589310 systemd[1]: Reached target Ceph cluster 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 14:02:04 np0005589310 systemd[1]: Reloading.
Jan 20 14:02:04 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:02:04 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:02:04 np0005589310 systemd[1]: Reloading.
Jan 20 14:02:04 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:02:04 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:02:04 np0005589310 systemd[1]: Created slice Slice /system/ceph-90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 14:02:04 np0005589310 systemd[1]: Reached target System Time Set.
Jan 20 14:02:04 np0005589310 systemd[1]: Reached target System Time Synchronized.
Jan 20 14:02:04 np0005589310 systemd[1]: Starting Ceph mon.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 14:02:04 np0005589310 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 14:02:04 np0005589310 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 14:02:04 np0005589310 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 14:02:04 np0005589310 podman[74745]: 2026-01-20 19:02:04.76390148 +0000 UTC m=+0.044087530 container create 97101f8c87b2303b90eec3234d4634bcb6df2765144527ed263fd31320ac0a48 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:02:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7758296ded2ba9dfc7d6485a6598c3641ae7628376cf93ba34c54a9e40ee12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7758296ded2ba9dfc7d6485a6598c3641ae7628376cf93ba34c54a9e40ee12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7758296ded2ba9dfc7d6485a6598c3641ae7628376cf93ba34c54a9e40ee12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7758296ded2ba9dfc7d6485a6598c3641ae7628376cf93ba34c54a9e40ee12/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:04 np0005589310 podman[74745]: 2026-01-20 19:02:04.824756541 +0000 UTC m=+0.104942631 container init 97101f8c87b2303b90eec3234d4634bcb6df2765144527ed263fd31320ac0a48 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:02:04 np0005589310 podman[74745]: 2026-01-20 19:02:04.831784168 +0000 UTC m=+0.111970218 container start 97101f8c87b2303b90eec3234d4634bcb6df2765144527ed263fd31320ac0a48 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 20 14:02:04 np0005589310 bash[74745]: 97101f8c87b2303b90eec3234d4634bcb6df2765144527ed263fd31320ac0a48
Jan 20 14:02:04 np0005589310 podman[74745]: 2026-01-20 19:02:04.743399003 +0000 UTC m=+0.023585053 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:04 np0005589310 systemd[1]: Started Ceph mon.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: pidfile_write: ignore empty --pid-file
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: load: jerasure load: lrc 
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: RocksDB version: 7.9.2
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Git sha 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: DB SUMMARY
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: DB Session ID:  LCRON4T8QIWEFDE4R6FR
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: CURRENT file:  CURRENT
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                         Options.error_if_exists: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                       Options.create_if_missing: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                                     Options.env: 0x55fb0fdfe440
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                                Options.info_log: 0x55fb1131b3e0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                              Options.statistics: (nil)
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                               Options.use_fsync: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                              Options.db_log_dir: 
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                                 Options.wal_dir: 
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                    Options.write_buffer_manager: 0x55fb1129a140
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                  Options.unordered_write: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                               Options.row_cache: None
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                              Options.wal_filter: None
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.two_write_queues: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.wal_compression: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.atomic_flush: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.max_background_jobs: 2
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.max_background_compactions: -1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.max_subcompactions: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.max_total_wal_size: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                          Options.max_open_files: -1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:       Options.compaction_readahead_size: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Compression algorithms supported:
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: #011kZSTD supported: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: #011kXpressCompression supported: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: #011kBZip2Compression supported: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: #011kLZ4Compression supported: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: #011kZlibCompression supported: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: #011kSnappyCompression supported: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:           Options.merge_operator: 
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:        Options.compaction_filter: None
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb112a6700)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fb1128b8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:        Options.write_buffer_size: 33554432
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:  Options.max_write_buffer_number: 2
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:          Options.compression: NoCompression
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.num_levels: 7
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a47071cc-b77a-49b8-9d53-e31f11fbdebb
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935724885125, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935724887419, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "LCRON4T8QIWEFDE4R6FR", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935724887518, "job": 1, "event": "recovery_finished"}
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fb112b8e00
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: DB pointer 0x55fb11404000
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fb1128b8d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@-1(???) e0 preinit fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 20 14:02:04 np0005589310 podman[74765]: 2026-01-20 19:02:04.920549322 +0000 UTC m=+0.052257245 container create 2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: log_channel(cluster) log [DBG] : fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: log_channel(cluster) log [DBG] : last_changed 2026-01-20T19:02:02.864397+0000
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: log_channel(cluster) log [DBG] : created 2026-01-20T19:02:02.864397+0000
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-20T19:02:03.069482Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).mds e1 new map
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2026-01-20T19:02:04:930609+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: log_channel(cluster) log [DBG] : fsmap 
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mkfs 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 20 14:02:04 np0005589310 ceph-mon[74764]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 14:02:04 np0005589310 systemd[1]: Started libpod-conmon-2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191.scope.
Jan 20 14:02:04 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d09f4975720db88c97c33b9a9fb79508bbafe3765418dd63bec7cab99db3b53/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d09f4975720db88c97c33b9a9fb79508bbafe3765418dd63bec7cab99db3b53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d09f4975720db88c97c33b9a9fb79508bbafe3765418dd63bec7cab99db3b53/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:04 np0005589310 podman[74765]: 2026-01-20 19:02:04.899124902 +0000 UTC m=+0.030832845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:05 np0005589310 podman[74765]: 2026-01-20 19:02:05.004280027 +0000 UTC m=+0.135987950 container init 2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:02:05 np0005589310 podman[74765]: 2026-01-20 19:02:05.010970686 +0000 UTC m=+0.142678649 container start 2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 14:02:05 np0005589310 podman[74765]: 2026-01-20 19:02:05.015192297 +0000 UTC m=+0.146900240 container attach 2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:02:05 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 20 14:02:05 np0005589310 ceph-mon[74764]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3950315037' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 20 14:02:05 np0005589310 pedantic_lumiere[74820]:  cluster:
Jan 20 14:02:05 np0005589310 pedantic_lumiere[74820]:    id:     90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 14:02:05 np0005589310 pedantic_lumiere[74820]:    health: HEALTH_OK
Jan 20 14:02:05 np0005589310 pedantic_lumiere[74820]: 
Jan 20 14:02:05 np0005589310 pedantic_lumiere[74820]:  services:
Jan 20 14:02:05 np0005589310 pedantic_lumiere[74820]:    mon: 1 daemons, quorum compute-0 (age 0.267702s) [leader: compute-0]
Jan 20 14:02:05 np0005589310 pedantic_lumiere[74820]:    mgr: no daemons active
Jan 20 14:02:05 np0005589310 pedantic_lumiere[74820]:    osd: 0 osds: 0 up, 0 in
Jan 20 14:02:05 np0005589310 pedantic_lumiere[74820]: 
Jan 20 14:02:05 np0005589310 pedantic_lumiere[74820]:  data:
Jan 20 14:02:05 np0005589310 pedantic_lumiere[74820]:    pools:   0 pools, 0 pgs
Jan 20 14:02:05 np0005589310 pedantic_lumiere[74820]:    objects: 0 objects, 0 B
Jan 20 14:02:05 np0005589310 pedantic_lumiere[74820]:    usage:   0 B used, 0 B / 0 B avail
Jan 20 14:02:05 np0005589310 pedantic_lumiere[74820]:    pgs:     
Jan 20 14:02:05 np0005589310 pedantic_lumiere[74820]: 
Jan 20 14:02:05 np0005589310 systemd[1]: libpod-2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191.scope: Deactivated successfully.
Jan 20 14:02:05 np0005589310 podman[74765]: 2026-01-20 19:02:05.212872576 +0000 UTC m=+0.344580499 container died 2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:05 np0005589310 podman[74765]: 2026-01-20 19:02:05.252119221 +0000 UTC m=+0.383827144 container remove 2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:05 np0005589310 systemd[1]: libpod-conmon-2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191.scope: Deactivated successfully.
Jan 20 14:02:05 np0005589310 podman[74859]: 2026-01-20 19:02:05.318161054 +0000 UTC m=+0.044304666 container create 8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f (image=quay.io/ceph/ceph:v20, name=festive_cori, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:02:05 np0005589310 systemd[1]: Started libpod-conmon-8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f.scope.
Jan 20 14:02:05 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:05 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c0c6af9c56424efa3e7d5bcc469a3bf7c0b6f83a47f126a1fe3794c84069d78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:05 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c0c6af9c56424efa3e7d5bcc469a3bf7c0b6f83a47f126a1fe3794c84069d78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:05 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c0c6af9c56424efa3e7d5bcc469a3bf7c0b6f83a47f126a1fe3794c84069d78/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:05 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c0c6af9c56424efa3e7d5bcc469a3bf7c0b6f83a47f126a1fe3794c84069d78/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:05 np0005589310 podman[74859]: 2026-01-20 19:02:05.298762961 +0000 UTC m=+0.024906593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:05 np0005589310 podman[74859]: 2026-01-20 19:02:05.614878551 +0000 UTC m=+0.341022173 container init 8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f (image=quay.io/ceph/ceph:v20, name=festive_cori, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:02:05 np0005589310 podman[74859]: 2026-01-20 19:02:05.620476375 +0000 UTC m=+0.346619987 container start 8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f (image=quay.io/ceph/ceph:v20, name=festive_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 20 14:02:05 np0005589310 podman[74859]: 2026-01-20 19:02:05.62527855 +0000 UTC m=+0.351422192 container attach 8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f (image=quay.io/ceph/ceph:v20, name=festive_cori, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:02:05 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 20 14:02:05 np0005589310 ceph-mon[74764]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4235203999' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 20 14:02:05 np0005589310 ceph-mon[74764]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4235203999' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 14:02:05 np0005589310 festive_cori[74875]: 
Jan 20 14:02:05 np0005589310 festive_cori[74875]: [global]
Jan 20 14:02:05 np0005589310 festive_cori[74875]: #011fsid = 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 14:02:05 np0005589310 festive_cori[74875]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 20 14:02:05 np0005589310 festive_cori[74875]: #011osd_crush_chooseleaf_type = 0
Jan 20 14:02:05 np0005589310 systemd[1]: libpod-8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f.scope: Deactivated successfully.
Jan 20 14:02:05 np0005589310 podman[74859]: 2026-01-20 19:02:05.830024817 +0000 UTC m=+0.556168439 container died 8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f (image=quay.io/ceph/ceph:v20, name=festive_cori, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:05 np0005589310 systemd[1]: var-lib-containers-storage-overlay-2c0c6af9c56424efa3e7d5bcc469a3bf7c0b6f83a47f126a1fe3794c84069d78-merged.mount: Deactivated successfully.
Jan 20 14:02:05 np0005589310 podman[74859]: 2026-01-20 19:02:05.865893451 +0000 UTC m=+0.592037063 container remove 8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f (image=quay.io/ceph/ceph:v20, name=festive_cori, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 20 14:02:05 np0005589310 systemd[1]: libpod-conmon-8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f.scope: Deactivated successfully.
Jan 20 14:02:05 np0005589310 podman[74913]: 2026-01-20 19:02:05.984099288 +0000 UTC m=+0.097802401 container create f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b (image=quay.io/ceph/ceph:v20, name=heuristic_galileo, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 14:02:05 np0005589310 ceph-mon[74764]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 14:02:05 np0005589310 ceph-mon[74764]: from='client.? 192.168.122.100:0/4235203999' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 20 14:02:05 np0005589310 ceph-mon[74764]: from='client.? 192.168.122.100:0/4235203999' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 14:02:06 np0005589310 podman[74913]: 2026-01-20 19:02:05.909107211 +0000 UTC m=+0.022810354 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:06 np0005589310 systemd[1]: Started libpod-conmon-f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b.scope.
Jan 20 14:02:06 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd320281a1e7e23cbb6cd85a263a578a9a28543c2a5d735a20e8f38b35b4fdf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd320281a1e7e23cbb6cd85a263a578a9a28543c2a5d735a20e8f38b35b4fdf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd320281a1e7e23cbb6cd85a263a578a9a28543c2a5d735a20e8f38b35b4fdf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd320281a1e7e23cbb6cd85a263a578a9a28543c2a5d735a20e8f38b35b4fdf/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:06 np0005589310 podman[74913]: 2026-01-20 19:02:06.061161723 +0000 UTC m=+0.174864836 container init f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b (image=quay.io/ceph/ceph:v20, name=heuristic_galileo, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 20 14:02:06 np0005589310 podman[74913]: 2026-01-20 19:02:06.065979818 +0000 UTC m=+0.179682921 container start f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b (image=quay.io/ceph/ceph:v20, name=heuristic_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:02:06 np0005589310 podman[74913]: 2026-01-20 19:02:06.070321381 +0000 UTC m=+0.184024504 container attach f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b (image=quay.io/ceph/ceph:v20, name=heuristic_galileo, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 20 14:02:06 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:02:06 np0005589310 ceph-mon[74764]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2868618155' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:02:06 np0005589310 systemd[1]: libpod-f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b.scope: Deactivated successfully.
Jan 20 14:02:06 np0005589310 podman[74913]: 2026-01-20 19:02:06.272887857 +0000 UTC m=+0.386590990 container died f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b (image=quay.io/ceph/ceph:v20, name=heuristic_galileo, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:02:06 np0005589310 systemd[1]: var-lib-containers-storage-overlay-fbd320281a1e7e23cbb6cd85a263a578a9a28543c2a5d735a20e8f38b35b4fdf-merged.mount: Deactivated successfully.
Jan 20 14:02:06 np0005589310 podman[74913]: 2026-01-20 19:02:06.458927119 +0000 UTC m=+0.572630272 container remove f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b (image=quay.io/ceph/ceph:v20, name=heuristic_galileo, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 14:02:06 np0005589310 systemd[1]: Stopping Ceph mon.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 14:02:06 np0005589310 systemd[1]: libpod-conmon-f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b.scope: Deactivated successfully.
Jan 20 14:02:06 np0005589310 ceph-mon[74764]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 20 14:02:06 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0[74760]: 2026-01-20T19:02:06.679+0000 7f379ffc6640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 20 14:02:06 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0[74760]: 2026-01-20T19:02:06.679+0000 7f379ffc6640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 20 14:02:06 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 20 14:02:06 np0005589310 ceph-mon[74764]: mon.compute-0@0(leader) e1 shutdown
Jan 20 14:02:06 np0005589310 ceph-mon[74764]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 20 14:02:06 np0005589310 ceph-mon[74764]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 20 14:02:06 np0005589310 podman[74997]: 2026-01-20 19:02:06.700039552 +0000 UTC m=+0.056737242 container stop 97101f8c87b2303b90eec3234d4634bcb6df2765144527ed263fd31320ac0a48 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:02:06 np0005589310 podman[74997]: 2026-01-20 19:02:06.726021451 +0000 UTC m=+0.082719151 container died 97101f8c87b2303b90eec3234d4634bcb6df2765144527ed263fd31320ac0a48 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 20 14:02:06 np0005589310 systemd[1]: var-lib-containers-storage-overlay-8c7758296ded2ba9dfc7d6485a6598c3641ae7628376cf93ba34c54a9e40ee12-merged.mount: Deactivated successfully.
Jan 20 14:02:06 np0005589310 podman[74997]: 2026-01-20 19:02:06.784171136 +0000 UTC m=+0.140868816 container remove 97101f8c87b2303b90eec3234d4634bcb6df2765144527ed263fd31320ac0a48 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 14:02:06 np0005589310 bash[74997]: ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0
Jan 20 14:02:06 np0005589310 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 14:02:06 np0005589310 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 14:02:06 np0005589310 systemd[1]: ceph-90fff835-31df-513f-a409-b6642f04e6ac@mon.compute-0.service: Deactivated successfully.
Jan 20 14:02:06 np0005589310 systemd[1]: Stopped Ceph mon.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 14:02:06 np0005589310 systemd[1]: Starting Ceph mon.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 14:02:07 np0005589310 podman[75100]: 2026-01-20 19:02:07.13608465 +0000 UTC m=+0.040239540 container create b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 20 14:02:07 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65098b3d119dd06f2b0ad003613b56aa6789cb414d37b21e84cc1174543b7115/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:07 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65098b3d119dd06f2b0ad003613b56aa6789cb414d37b21e84cc1174543b7115/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:07 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65098b3d119dd06f2b0ad003613b56aa6789cb414d37b21e84cc1174543b7115/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:07 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65098b3d119dd06f2b0ad003613b56aa6789cb414d37b21e84cc1174543b7115/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:07 np0005589310 podman[75100]: 2026-01-20 19:02:07.194135402 +0000 UTC m=+0.098290302 container init b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 20 14:02:07 np0005589310 podman[75100]: 2026-01-20 19:02:07.200985325 +0000 UTC m=+0.105140205 container start b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:02:07 np0005589310 podman[75100]: 2026-01-20 19:02:07.119984626 +0000 UTC m=+0.024139536 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: pidfile_write: ignore empty --pid-file
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: load: jerasure load: lrc 
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: RocksDB version: 7.9.2
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Git sha 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: DB SUMMARY
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: DB Session ID:  09M3MP4DL9LGPOBMD17J
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: CURRENT file:  CURRENT
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                         Options.error_if_exists: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                       Options.create_if_missing: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                                     Options.env: 0x55eae18a0440
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                                Options.info_log: 0x55eae3cbfe80
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                              Options.statistics: (nil)
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                               Options.use_fsync: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                              Options.db_log_dir: 
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                                 Options.wal_dir: 
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                    Options.write_buffer_manager: 0x55eae3d0a140
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                  Options.unordered_write: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                               Options.row_cache: None
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                              Options.wal_filter: None
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.two_write_queues: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.wal_compression: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.atomic_flush: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.max_background_jobs: 2
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.max_background_compactions: -1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.max_subcompactions: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.max_total_wal_size: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                          Options.max_open_files: -1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:       Options.compaction_readahead_size: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Compression algorithms supported:
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: #011kZSTD supported: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: #011kXpressCompression supported: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: #011kBZip2Compression supported: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: #011kLZ4Compression supported: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: #011kZlibCompression supported: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: #011kSnappyCompression supported: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:           Options.merge_operator: 
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:        Options.compaction_filter: None
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eae3d16a00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55eae3cfb8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:        Options.write_buffer_size: 33554432
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:  Options.max_write_buffer_number: 2
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:          Options.compression: NoCompression
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.num_levels: 7
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a47071cc-b77a-49b8-9d53-e31f11fbdebb
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935727243825, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935727274709, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935727, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935727274856, "job": 1, "event": "recovery_finished"}
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 20 14:02:07 np0005589310 bash[75100]: b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681
Jan 20 14:02:07 np0005589310 systemd[1]: Started Ceph mon.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55eae3d28e00
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: DB pointer 0x55eae3e72000
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.9      0.03              0.00         1    0.031       0      0       0.0       0.0#012 Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.9      0.03              0.00         1    0.031       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.9      0.03              0.00         1    0.031       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.9      0.03              0.00         1    0.031       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 1.33 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 1.33 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55eae3cfb8d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: mon.compute-0@-1(???) e1 preinit fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: mon.compute-0@-1(???).mds e1 new map
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2026-01-20T19:02:04:930609+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : last_changed 2026-01-20T19:02:02.864397+0000
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : created 2026-01-20T19:02:02.864397+0000
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : fsmap 
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 20 14:02:07 np0005589310 podman[75144]: 2026-01-20 19:02:07.410465445 +0000 UTC m=+0.111250971 container create c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709 (image=quay.io/ceph/ceph:v20, name=magical_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 14:02:07 np0005589310 podman[75144]: 2026-01-20 19:02:07.321988527 +0000 UTC m=+0.022774063 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:07 np0005589310 systemd[1]: Started libpod-conmon-c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709.scope.
Jan 20 14:02:07 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:07 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0c526bf7c87fd2807b59dfe99f8be27fe0dd811e7d594ed63adb43004a84fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:07 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0c526bf7c87fd2807b59dfe99f8be27fe0dd811e7d594ed63adb43004a84fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:07 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0c526bf7c87fd2807b59dfe99f8be27fe0dd811e7d594ed63adb43004a84fb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:07 np0005589310 podman[75144]: 2026-01-20 19:02:07.494238741 +0000 UTC m=+0.195024277 container init c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709 (image=quay.io/ceph/ceph:v20, name=magical_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Jan 20 14:02:07 np0005589310 podman[75144]: 2026-01-20 19:02:07.50803376 +0000 UTC m=+0.208819286 container start c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709 (image=quay.io/ceph/ceph:v20, name=magical_feistel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 20 14:02:07 np0005589310 podman[75144]: 2026-01-20 19:02:07.512613358 +0000 UTC m=+0.213398904 container attach c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709 (image=quay.io/ceph/ceph:v20, name=magical_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 20 14:02:07 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 20 14:02:07 np0005589310 systemd[1]: libpod-c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709.scope: Deactivated successfully.
Jan 20 14:02:07 np0005589310 podman[75144]: 2026-01-20 19:02:07.725158062 +0000 UTC m=+0.425943578 container died c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709 (image=quay.io/ceph/ceph:v20, name=magical_feistel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:02:07 np0005589310 podman[75144]: 2026-01-20 19:02:07.761533528 +0000 UTC m=+0.462319044 container remove c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709 (image=quay.io/ceph/ceph:v20, name=magical_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:02:07 np0005589310 systemd[1]: libpod-conmon-c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709.scope: Deactivated successfully.
Jan 20 14:02:07 np0005589310 podman[75215]: 2026-01-20 19:02:07.852657899 +0000 UTC m=+0.057785987 container create 25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28 (image=quay.io/ceph/ceph:v20, name=nifty_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 14:02:07 np0005589310 systemd[1]: Started libpod-conmon-25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28.scope.
Jan 20 14:02:07 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:07 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef8e9da0542087ebd761d07079ae0621998993f749b462663d90391e1195861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:07 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef8e9da0542087ebd761d07079ae0621998993f749b462663d90391e1195861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:07 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef8e9da0542087ebd761d07079ae0621998993f749b462663d90391e1195861/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:07 np0005589310 podman[75215]: 2026-01-20 19:02:07.925104435 +0000 UTC m=+0.130232543 container init 25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28 (image=quay.io/ceph/ceph:v20, name=nifty_borg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:02:07 np0005589310 podman[75215]: 2026-01-20 19:02:07.831626288 +0000 UTC m=+0.036754376 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:07 np0005589310 podman[75215]: 2026-01-20 19:02:07.930101534 +0000 UTC m=+0.135229632 container start 25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28 (image=quay.io/ceph/ceph:v20, name=nifty_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 14:02:07 np0005589310 podman[75215]: 2026-01-20 19:02:07.933785471 +0000 UTC m=+0.138913569 container attach 25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28 (image=quay.io/ceph/ceph:v20, name=nifty_borg, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:02:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 20 14:02:08 np0005589310 systemd[1]: libpod-25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28.scope: Deactivated successfully.
Jan 20 14:02:08 np0005589310 podman[75215]: 2026-01-20 19:02:08.174326932 +0000 UTC m=+0.379455020 container died 25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28 (image=quay.io/ceph/ceph:v20, name=nifty_borg, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:02:08 np0005589310 systemd[1]: var-lib-containers-storage-overlay-8ef8e9da0542087ebd761d07079ae0621998993f749b462663d90391e1195861-merged.mount: Deactivated successfully.
Jan 20 14:02:08 np0005589310 podman[75215]: 2026-01-20 19:02:08.318639509 +0000 UTC m=+0.523767627 container remove 25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28 (image=quay.io/ceph/ceph:v20, name=nifty_borg, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:08 np0005589310 systemd[1]: libpod-conmon-25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28.scope: Deactivated successfully.
Jan 20 14:02:08 np0005589310 systemd[1]: Reloading.
Jan 20 14:02:08 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:02:08 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:02:08 np0005589310 systemd[1]: Reloading.
Jan 20 14:02:08 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:02:08 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:02:08 np0005589310 systemd[1]: Starting Ceph mgr.compute-0.meyjbf for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 14:02:09 np0005589310 podman[75398]: 2026-01-20 19:02:09.185872799 +0000 UTC m=+0.037870604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:09 np0005589310 podman[75398]: 2026-01-20 19:02:09.473477029 +0000 UTC m=+0.325474794 container create 60642dffa907a68ef49dd0ef246239786fb490af2161d3f9f8a813106e21468e (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:02:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4c759c859e30a4aed3ad7d3db505e494141cb5e9ce5dc8d1e931b5889ce0f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4c759c859e30a4aed3ad7d3db505e494141cb5e9ce5dc8d1e931b5889ce0f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4c759c859e30a4aed3ad7d3db505e494141cb5e9ce5dc8d1e931b5889ce0f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4c759c859e30a4aed3ad7d3db505e494141cb5e9ce5dc8d1e931b5889ce0f0/merged/var/lib/ceph/mgr/ceph-compute-0.meyjbf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:09 np0005589310 podman[75398]: 2026-01-20 19:02:09.559014517 +0000 UTC m=+0.411012382 container init 60642dffa907a68ef49dd0ef246239786fb490af2161d3f9f8a813106e21468e (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:09 np0005589310 podman[75398]: 2026-01-20 19:02:09.565957622 +0000 UTC m=+0.417955427 container start 60642dffa907a68ef49dd0ef246239786fb490af2161d3f9f8a813106e21468e (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:09 np0005589310 bash[75398]: 60642dffa907a68ef49dd0ef246239786fb490af2161d3f9f8a813106e21468e
Jan 20 14:02:09 np0005589310 systemd[1]: Started Ceph mgr.compute-0.meyjbf for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 14:02:09 np0005589310 ceph-mgr[75417]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 14:02:09 np0005589310 ceph-mgr[75417]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 20 14:02:09 np0005589310 ceph-mgr[75417]: pidfile_write: ignore empty --pid-file
Jan 20 14:02:09 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'alerts'
Jan 20 14:02:09 np0005589310 podman[75420]: 2026-01-20 19:02:09.743583324 +0000 UTC m=+0.107396840 container create aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3 (image=quay.io/ceph/ceph:v20, name=modest_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:09 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'balancer'
Jan 20 14:02:09 np0005589310 podman[75420]: 2026-01-20 19:02:09.67375989 +0000 UTC m=+0.037573466 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:09 np0005589310 systemd[1]: Started libpod-conmon-aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3.scope.
Jan 20 14:02:09 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4318c200c7d61d2e60ffb8ece0e87cd654e8407624419b9b39c862eb692e3ed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4318c200c7d61d2e60ffb8ece0e87cd654e8407624419b9b39c862eb692e3ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4318c200c7d61d2e60ffb8ece0e87cd654e8407624419b9b39c862eb692e3ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:09 np0005589310 podman[75420]: 2026-01-20 19:02:09.841579558 +0000 UTC m=+0.205393154 container init aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3 (image=quay.io/ceph/ceph:v20, name=modest_lalande, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 14:02:09 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'cephadm'
Jan 20 14:02:09 np0005589310 podman[75420]: 2026-01-20 19:02:09.855268455 +0000 UTC m=+0.219081971 container start aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3 (image=quay.io/ceph/ceph:v20, name=modest_lalande, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 14:02:09 np0005589310 podman[75420]: 2026-01-20 19:02:09.859117896 +0000 UTC m=+0.222931442 container attach aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3 (image=quay.io/ceph/ceph:v20, name=modest_lalande, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 14:02:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 20 14:02:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3058033490' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 20 14:02:10 np0005589310 modest_lalande[75456]: 
Jan 20 14:02:10 np0005589310 modest_lalande[75456]: {
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    "fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    "health": {
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "status": "HEALTH_OK",
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "checks": {},
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "mutes": []
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    },
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    "election_epoch": 5,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    "quorum": [
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        0
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    ],
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    "quorum_names": [
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "compute-0"
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    ],
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    "quorum_age": 2,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    "monmap": {
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "epoch": 1,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "min_mon_release_name": "tentacle",
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "num_mons": 1
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    },
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    "osdmap": {
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "epoch": 1,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "num_osds": 0,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "num_up_osds": 0,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "osd_up_since": 0,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "num_in_osds": 0,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "osd_in_since": 0,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "num_remapped_pgs": 0
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    },
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    "pgmap": {
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "pgs_by_state": [],
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "num_pgs": 0,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "num_pools": 0,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "num_objects": 0,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "data_bytes": 0,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "bytes_used": 0,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "bytes_avail": 0,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "bytes_total": 0
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    },
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    "fsmap": {
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "epoch": 1,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "btime": "2026-01-20T19:02:04:930609+0000",
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "by_rank": [],
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "up:standby": 0
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    },
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    "mgrmap": {
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "available": false,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "num_standbys": 0,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "modules": [
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:            "iostat",
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:            "nfs"
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        ],
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "services": {}
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    },
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    "servicemap": {
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "epoch": 1,
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "modified": "2026-01-20T19:02:04.932596+0000",
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:        "services": {}
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    },
Jan 20 14:02:10 np0005589310 modest_lalande[75456]:    "progress_events": {}
Jan 20 14:02:10 np0005589310 modest_lalande[75456]: }
Jan 20 14:02:10 np0005589310 systemd[1]: libpod-aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3.scope: Deactivated successfully.
Jan 20 14:02:10 np0005589310 podman[75420]: 2026-01-20 19:02:10.089392181 +0000 UTC m=+0.453205707 container died aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3 (image=quay.io/ceph/ceph:v20, name=modest_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:02:10 np0005589310 systemd[1]: var-lib-containers-storage-overlay-e4318c200c7d61d2e60ffb8ece0e87cd654e8407624419b9b39c862eb692e3ed-merged.mount: Deactivated successfully.
Jan 20 14:02:10 np0005589310 podman[75420]: 2026-01-20 19:02:10.131798141 +0000 UTC m=+0.495611657 container remove aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3 (image=quay.io/ceph/ceph:v20, name=modest_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:10 np0005589310 systemd[1]: libpod-conmon-aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3.scope: Deactivated successfully.
Jan 20 14:02:10 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'crash'
Jan 20 14:02:10 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'dashboard'
Jan 20 14:02:11 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'devicehealth'
Jan 20 14:02:11 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'diskprediction_local'
Jan 20 14:02:11 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 20 14:02:11 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 20 14:02:11 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]:  from numpy import show_config as show_numpy_config
Jan 20 14:02:11 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'influx'
Jan 20 14:02:11 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'insights'
Jan 20 14:02:11 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'iostat'
Jan 20 14:02:12 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'k8sevents'
Jan 20 14:02:12 np0005589310 podman[75504]: 2026-01-20 19:02:12.179847189 +0000 UTC m=+0.020196783 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:12 np0005589310 podman[75504]: 2026-01-20 19:02:12.316463653 +0000 UTC m=+0.156813217 container create 2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f (image=quay.io/ceph/ceph:v20, name=optimistic_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:12 np0005589310 systemd[1]: Started libpod-conmon-2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f.scope.
Jan 20 14:02:12 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774160578ab6233277f72f1066c8c0dd0c3da0dd2d9b0527df11f44ddb81be6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774160578ab6233277f72f1066c8c0dd0c3da0dd2d9b0527df11f44ddb81be6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774160578ab6233277f72f1066c8c0dd0c3da0dd2d9b0527df11f44ddb81be6c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:12 np0005589310 podman[75504]: 2026-01-20 19:02:12.412977732 +0000 UTC m=+0.253327326 container init 2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f (image=quay.io/ceph/ceph:v20, name=optimistic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:02:12 np0005589310 podman[75504]: 2026-01-20 19:02:12.417956011 +0000 UTC m=+0.258305585 container start 2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f (image=quay.io/ceph/ceph:v20, name=optimistic_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:02:12 np0005589310 podman[75504]: 2026-01-20 19:02:12.42460718 +0000 UTC m=+0.264956774 container attach 2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f (image=quay.io/ceph/ceph:v20, name=optimistic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:02:12 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'localpool'
Jan 20 14:02:12 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'mds_autoscaler'
Jan 20 14:02:12 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 20 14:02:12 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/79746608' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]: 
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]: {
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    "fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    "health": {
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "status": "HEALTH_OK",
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "checks": {},
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "mutes": []
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    },
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    "election_epoch": 5,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    "quorum": [
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        0
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    ],
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    "quorum_names": [
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "compute-0"
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    ],
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    "quorum_age": 5,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    "monmap": {
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "epoch": 1,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "min_mon_release_name": "tentacle",
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "num_mons": 1
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    },
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    "osdmap": {
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "epoch": 1,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "num_osds": 0,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "num_up_osds": 0,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "osd_up_since": 0,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "num_in_osds": 0,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "osd_in_since": 0,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "num_remapped_pgs": 0
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    },
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    "pgmap": {
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "pgs_by_state": [],
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "num_pgs": 0,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "num_pools": 0,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "num_objects": 0,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "data_bytes": 0,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "bytes_used": 0,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "bytes_avail": 0,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "bytes_total": 0
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    },
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    "fsmap": {
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "epoch": 1,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "btime": "2026-01-20T19:02:04:930609+0000",
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "by_rank": [],
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "up:standby": 0
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    },
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    "mgrmap": {
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "available": false,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "num_standbys": 0,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "modules": [
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:            "iostat",
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:            "nfs"
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        ],
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "services": {}
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    },
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    "servicemap": {
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "epoch": 1,
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "modified": "2026-01-20T19:02:04.932596+0000",
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:        "services": {}
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    },
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]:    "progress_events": {}
Jan 20 14:02:12 np0005589310 optimistic_hofstadter[75519]: }
Jan 20 14:02:12 np0005589310 systemd[1]: libpod-2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f.scope: Deactivated successfully.
Jan 20 14:02:12 np0005589310 podman[75504]: 2026-01-20 19:02:12.633411994 +0000 UTC m=+0.473761568 container died 2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f (image=quay.io/ceph/ceph:v20, name=optimistic_hofstadter, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:02:12 np0005589310 systemd[1]: var-lib-containers-storage-overlay-774160578ab6233277f72f1066c8c0dd0c3da0dd2d9b0527df11f44ddb81be6c-merged.mount: Deactivated successfully.
Jan 20 14:02:12 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'mirroring'
Jan 20 14:02:12 np0005589310 podman[75504]: 2026-01-20 19:02:12.777707841 +0000 UTC m=+0.618057435 container remove 2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f (image=quay.io/ceph/ceph:v20, name=optimistic_hofstadter, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 14:02:12 np0005589310 systemd[1]: libpod-conmon-2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f.scope: Deactivated successfully.
Jan 20 14:02:12 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'nfs'
Jan 20 14:02:13 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'orchestrator'
Jan 20 14:02:13 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'osd_perf_query'
Jan 20 14:02:13 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'osd_support'
Jan 20 14:02:13 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'pg_autoscaler'
Jan 20 14:02:13 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'progress'
Jan 20 14:02:13 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'prometheus'
Jan 20 14:02:14 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'rbd_support'
Jan 20 14:02:14 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'rgw'
Jan 20 14:02:14 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'rook'
Jan 20 14:02:14 np0005589310 podman[75560]: 2026-01-20 19:02:14.880690447 +0000 UTC m=+0.075335856 container create 01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36 (image=quay.io/ceph/ceph:v20, name=eloquent_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 14:02:14 np0005589310 systemd[1]: Started libpod-conmon-01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36.scope.
Jan 20 14:02:14 np0005589310 podman[75560]: 2026-01-20 19:02:14.82963082 +0000 UTC m=+0.024276249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:14 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:14 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bcf4f3f666a21474e9f7befb2d3dfcbfc465a1669fc49e2accde339c599e1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:14 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bcf4f3f666a21474e9f7befb2d3dfcbfc465a1669fc49e2accde339c599e1f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:14 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bcf4f3f666a21474e9f7befb2d3dfcbfc465a1669fc49e2accde339c599e1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:14 np0005589310 podman[75560]: 2026-01-20 19:02:14.968335204 +0000 UTC m=+0.162980633 container init 01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36 (image=quay.io/ceph/ceph:v20, name=eloquent_mestorf, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:02:14 np0005589310 podman[75560]: 2026-01-20 19:02:14.973127879 +0000 UTC m=+0.167773278 container start 01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36 (image=quay.io/ceph/ceph:v20, name=eloquent_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:02:14 np0005589310 podman[75560]: 2026-01-20 19:02:14.976463358 +0000 UTC m=+0.171108777 container attach 01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36 (image=quay.io/ceph/ceph:v20, name=eloquent_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:15 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'selftest'
Jan 20 14:02:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 20 14:02:15 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3413646804' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]: 
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]: {
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    "fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    "health": {
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "status": "HEALTH_OK",
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "checks": {},
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "mutes": []
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    },
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    "election_epoch": 5,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    "quorum": [
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        0
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    ],
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    "quorum_names": [
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "compute-0"
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    ],
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    "quorum_age": 7,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    "monmap": {
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "epoch": 1,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "min_mon_release_name": "tentacle",
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "num_mons": 1
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    },
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    "osdmap": {
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "epoch": 1,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "num_osds": 0,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "num_up_osds": 0,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "osd_up_since": 0,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "num_in_osds": 0,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "osd_in_since": 0,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "num_remapped_pgs": 0
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    },
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    "pgmap": {
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "pgs_by_state": [],
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "num_pgs": 0,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "num_pools": 0,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "num_objects": 0,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "data_bytes": 0,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "bytes_used": 0,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "bytes_avail": 0,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "bytes_total": 0
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    },
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    "fsmap": {
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "epoch": 1,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "btime": "2026-01-20T19:02:04:930609+0000",
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "by_rank": [],
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "up:standby": 0
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    },
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    "mgrmap": {
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "available": false,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "num_standbys": 0,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "modules": [
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:            "iostat",
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:            "nfs"
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        ],
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "services": {}
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    },
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    "servicemap": {
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "epoch": 1,
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "modified": "2026-01-20T19:02:04.932596+0000",
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:        "services": {}
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    },
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]:    "progress_events": {}
Jan 20 14:02:15 np0005589310 eloquent_mestorf[75577]: }
Jan 20 14:02:15 np0005589310 systemd[1]: libpod-01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36.scope: Deactivated successfully.
Jan 20 14:02:15 np0005589310 podman[75560]: 2026-01-20 19:02:15.178553583 +0000 UTC m=+0.373198972 container died 01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36 (image=quay.io/ceph/ceph:v20, name=eloquent_mestorf, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 20 14:02:15 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'smb'
Jan 20 14:02:15 np0005589310 systemd[1]: var-lib-containers-storage-overlay-32bcf4f3f666a21474e9f7befb2d3dfcbfc465a1669fc49e2accde339c599e1f-merged.mount: Deactivated successfully.
Jan 20 14:02:15 np0005589310 podman[75560]: 2026-01-20 19:02:15.284382303 +0000 UTC m=+0.479027702 container remove 01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36 (image=quay.io/ceph/ceph:v20, name=eloquent_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:02:15 np0005589310 systemd[1]: libpod-conmon-01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36.scope: Deactivated successfully.
Jan 20 14:02:15 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'snap_schedule'
Jan 20 14:02:15 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'stats'
Jan 20 14:02:15 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'status'
Jan 20 14:02:15 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'telegraf'
Jan 20 14:02:15 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'telemetry'
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'test_orchestrator'
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'volumes'
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: ms_deliver_dispatch: unhandled message 0x5595805a9860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.meyjbf
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr handle_mgr_map Activating!
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr handle_mgr_map I am now activating
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.meyjbf(active, starting, since 0.0115033s)
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mds metadata"} : dispatch
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).mds e1 all = 1
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata"} : dispatch
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mon metadata"} : dispatch
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.meyjbf", "id": "compute-0.meyjbf"} v 0)
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mgr metadata", "who": "compute-0.meyjbf", "id": "compute-0.meyjbf"} : dispatch
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: balancer
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: crash
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [balancer INFO root] Starting
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : Manager daemon compute-0.meyjbf is now available
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:02:16
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [balancer INFO root] No pools available
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: devicehealth
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [devicehealth INFO root] Starting
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: iostat
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: nfs
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: orchestrator
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: pg_autoscaler
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: progress
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [progress INFO root] Loading...
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [progress INFO root] No stored events to load
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [progress INFO root] Loaded [] historic events
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [progress INFO root] Loaded OSDMap, ready.
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] recovery thread starting
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] starting setup
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: rbd_support
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: status
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/mirror_snapshot_schedule"} v 0)
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/mirror_snapshot_schedule"} : dispatch
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: telemetry
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] PerfHandler: starting
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TaskHandler: starting
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/trash_purge_schedule"} v 0)
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/trash_purge_schedule"} : dispatch
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] setup complete
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: Activating manager daemon compute-0.meyjbf
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: Manager daemon compute-0.meyjbf is now available
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/mirror_snapshot_schedule"} : dispatch
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/trash_purge_schedule"} : dispatch
Jan 20 14:02:16 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: volumes
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 20 14:02:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:17 np0005589310 podman[75693]: 2026-01-20 19:02:17.376852318 +0000 UTC m=+0.071196536 container create fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3 (image=quay.io/ceph/ceph:v20, name=youthful_williamson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:02:17 np0005589310 systemd[1]: Started libpod-conmon-fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3.scope.
Jan 20 14:02:17 np0005589310 podman[75693]: 2026-01-20 19:02:17.328815644 +0000 UTC m=+0.023159862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:17 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:17 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a932ec66600f495820ae1fa0fd2ee84ee6bdc780823753448bd4a5659da38629/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:17 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a932ec66600f495820ae1fa0fd2ee84ee6bdc780823753448bd4a5659da38629/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:17 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a932ec66600f495820ae1fa0fd2ee84ee6bdc780823753448bd4a5659da38629/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:17 np0005589310 podman[75693]: 2026-01-20 19:02:17.442874482 +0000 UTC m=+0.137218720 container init fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3 (image=quay.io/ceph/ceph:v20, name=youthful_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:02:17 np0005589310 podman[75693]: 2026-01-20 19:02:17.447650765 +0000 UTC m=+0.141994973 container start fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3 (image=quay.io/ceph/ceph:v20, name=youthful_williamson, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:02:17 np0005589310 podman[75693]: 2026-01-20 19:02:17.451730543 +0000 UTC m=+0.146074791 container attach fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3 (image=quay.io/ceph/ceph:v20, name=youthful_williamson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:02:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 20 14:02:17 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/895259608' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]: 
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]: {
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    "fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    "health": {
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "status": "HEALTH_OK",
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "checks": {},
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "mutes": []
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    },
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    "election_epoch": 5,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    "quorum": [
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        0
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    ],
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    "quorum_names": [
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "compute-0"
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    ],
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    "quorum_age": 10,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    "monmap": {
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "epoch": 1,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "min_mon_release_name": "tentacle",
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "num_mons": 1
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    },
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    "osdmap": {
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "epoch": 1,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "num_osds": 0,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "num_up_osds": 0,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "osd_up_since": 0,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "num_in_osds": 0,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "osd_in_since": 0,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "num_remapped_pgs": 0
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    },
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    "pgmap": {
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "pgs_by_state": [],
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "num_pgs": 0,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "num_pools": 0,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "num_objects": 0,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "data_bytes": 0,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "bytes_used": 0,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "bytes_avail": 0,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "bytes_total": 0
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    },
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    "fsmap": {
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "epoch": 1,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "btime": "2026-01-20T19:02:04:930609+0000",
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "by_rank": [],
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "up:standby": 0
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    },
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    "mgrmap": {
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "available": false,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "num_standbys": 0,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "modules": [
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:            "iostat",
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:            "nfs"
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        ],
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "services": {}
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    },
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    "servicemap": {
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "epoch": 1,
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "modified": "2026-01-20T19:02:04.932596+0000",
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:        "services": {}
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    },
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]:    "progress_events": {}
Jan 20 14:02:17 np0005589310 youthful_williamson[75709]: }
Jan 20 14:02:17 np0005589310 systemd[1]: libpod-fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3.scope: Deactivated successfully.
Jan 20 14:02:17 np0005589310 podman[75693]: 2026-01-20 19:02:17.930965948 +0000 UTC m=+0.625310166 container died fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3 (image=quay.io/ceph/ceph:v20, name=youthful_williamson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:02:18 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.meyjbf(active, since 1.48832s)
Jan 20 14:02:18 np0005589310 ceph-mon[75120]: from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:18 np0005589310 ceph-mon[75120]: from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:18 np0005589310 ceph-mon[75120]: from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:18 np0005589310 systemd[1]: var-lib-containers-storage-overlay-a932ec66600f495820ae1fa0fd2ee84ee6bdc780823753448bd4a5659da38629-merged.mount: Deactivated successfully.
Jan 20 14:02:18 np0005589310 podman[75693]: 2026-01-20 19:02:18.140830418 +0000 UTC m=+0.835174636 container remove fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3 (image=quay.io/ceph/ceph:v20, name=youthful_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:02:18 np0005589310 systemd[1]: libpod-conmon-fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3.scope: Deactivated successfully.
Jan 20 14:02:18 np0005589310 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 14:02:18 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:02:19 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.meyjbf(active, since 2s)
Jan 20 14:02:20 np0005589310 podman[75748]: 2026-01-20 19:02:20.187580546 +0000 UTC m=+0.023236609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:20 np0005589310 podman[75748]: 2026-01-20 19:02:20.416136704 +0000 UTC m=+0.251792757 container create 570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1 (image=quay.io/ceph/ceph:v20, name=reverent_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:02:20 np0005589310 systemd[1]: Started libpod-conmon-570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1.scope.
Jan 20 14:02:20 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:20 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/437ab07b79c271288dcd7579d25188231864983bf937bbe0a11edc5b8e4f5fec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:20 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/437ab07b79c271288dcd7579d25188231864983bf937bbe0a11edc5b8e4f5fec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:20 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/437ab07b79c271288dcd7579d25188231864983bf937bbe0a11edc5b8e4f5fec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:20 np0005589310 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 14:02:20 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:02:20 np0005589310 podman[75748]: 2026-01-20 19:02:20.663559248 +0000 UTC m=+0.499215321 container init 570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1 (image=quay.io/ceph/ceph:v20, name=reverent_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:02:20 np0005589310 podman[75748]: 2026-01-20 19:02:20.669068293 +0000 UTC m=+0.504724346 container start 570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1 (image=quay.io/ceph/ceph:v20, name=reverent_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:02:20 np0005589310 podman[75748]: 2026-01-20 19:02:20.673003373 +0000 UTC m=+0.508659456 container attach 570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1 (image=quay.io/ceph/ceph:v20, name=reverent_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:02:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 20 14:02:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/24894714' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]: 
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]: {
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    "fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    "health": {
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "status": "HEALTH_OK",
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "checks": {},
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "mutes": []
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    },
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    "election_epoch": 5,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    "quorum": [
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        0
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    ],
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    "quorum_names": [
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "compute-0"
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    ],
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    "quorum_age": 13,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    "monmap": {
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "epoch": 1,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "min_mon_release_name": "tentacle",
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "num_mons": 1
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    },
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    "osdmap": {
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "epoch": 1,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "num_osds": 0,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "num_up_osds": 0,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "osd_up_since": 0,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "num_in_osds": 0,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "osd_in_since": 0,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "num_remapped_pgs": 0
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    },
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    "pgmap": {
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "pgs_by_state": [],
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "num_pgs": 0,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "num_pools": 0,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "num_objects": 0,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "data_bytes": 0,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "bytes_used": 0,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "bytes_avail": 0,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "bytes_total": 0
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    },
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    "fsmap": {
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "epoch": 1,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "btime": "2026-01-20T19:02:04:930609+0000",
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "by_rank": [],
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "up:standby": 0
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    },
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    "mgrmap": {
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "available": true,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "num_standbys": 0,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "modules": [
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:            "iostat",
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:            "nfs"
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        ],
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "services": {}
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    },
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    "servicemap": {
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "epoch": 1,
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "modified": "2026-01-20T19:02:04.932596+0000",
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:        "services": {}
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    },
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]:    "progress_events": {}
Jan 20 14:02:21 np0005589310 reverent_wiles[75765]: }
Jan 20 14:02:21 np0005589310 systemd[1]: libpod-570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1.scope: Deactivated successfully.
Jan 20 14:02:21 np0005589310 podman[75748]: 2026-01-20 19:02:21.176456735 +0000 UTC m=+1.012112798 container died 570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1 (image=quay.io/ceph/ceph:v20, name=reverent_wiles, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 14:02:21 np0005589310 systemd[1]: var-lib-containers-storage-overlay-437ab07b79c271288dcd7579d25188231864983bf937bbe0a11edc5b8e4f5fec-merged.mount: Deactivated successfully.
Jan 20 14:02:21 np0005589310 podman[75748]: 2026-01-20 19:02:21.348077187 +0000 UTC m=+1.183733240 container remove 570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1 (image=quay.io/ceph/ceph:v20, name=reverent_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:02:21 np0005589310 systemd[1]: libpod-conmon-570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1.scope: Deactivated successfully.
Jan 20 14:02:21 np0005589310 podman[75805]: 2026-01-20 19:02:21.410452655 +0000 UTC m=+0.042885692 container create eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0 (image=quay.io/ceph/ceph:v20, name=elastic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 14:02:21 np0005589310 systemd[1]: Started libpod-conmon-eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0.scope.
Jan 20 14:02:21 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:21 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd672c4a79242e9a9eaf3c7bb15e941f9c167cec467e5699c591244fd2754873/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:21 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd672c4a79242e9a9eaf3c7bb15e941f9c167cec467e5699c591244fd2754873/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:21 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd672c4a79242e9a9eaf3c7bb15e941f9c167cec467e5699c591244fd2754873/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:21 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd672c4a79242e9a9eaf3c7bb15e941f9c167cec467e5699c591244fd2754873/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:21 np0005589310 podman[75805]: 2026-01-20 19:02:21.472153662 +0000 UTC m=+0.104586709 container init eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0 (image=quay.io/ceph/ceph:v20, name=elastic_carver, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:02:21 np0005589310 podman[75805]: 2026-01-20 19:02:21.476559124 +0000 UTC m=+0.108992151 container start eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0 (image=quay.io/ceph/ceph:v20, name=elastic_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:21 np0005589310 podman[75805]: 2026-01-20 19:02:21.479598 +0000 UTC m=+0.112031027 container attach eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0 (image=quay.io/ceph/ceph:v20, name=elastic_carver, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 20 14:02:21 np0005589310 podman[75805]: 2026-01-20 19:02:21.389660146 +0000 UTC m=+0.022093233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 20 14:02:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/612880660' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 20 14:02:21 np0005589310 elastic_carver[75821]: 
Jan 20 14:02:21 np0005589310 elastic_carver[75821]: [global]
Jan 20 14:02:21 np0005589310 elastic_carver[75821]: #011fsid = 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 14:02:21 np0005589310 elastic_carver[75821]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 20 14:02:21 np0005589310 elastic_carver[75821]: #011osd_crush_chooseleaf_type = 0
Jan 20 14:02:21 np0005589310 systemd[1]: libpod-eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0.scope: Deactivated successfully.
Jan 20 14:02:21 np0005589310 podman[75805]: 2026-01-20 19:02:21.999610439 +0000 UTC m=+0.632043466 container died eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0 (image=quay.io/ceph/ceph:v20, name=elastic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 14:02:22 np0005589310 systemd[1]: var-lib-containers-storage-overlay-cd672c4a79242e9a9eaf3c7bb15e941f9c167cec467e5699c591244fd2754873-merged.mount: Deactivated successfully.
Jan 20 14:02:22 np0005589310 podman[75805]: 2026-01-20 19:02:22.312844648 +0000 UTC m=+0.945277675 container remove eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0 (image=quay.io/ceph/ceph:v20, name=elastic_carver, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:02:22 np0005589310 systemd[1]: libpod-conmon-eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0.scope: Deactivated successfully.
Jan 20 14:02:22 np0005589310 podman[75860]: 2026-01-20 19:02:22.383113736 +0000 UTC m=+0.048896852 container create b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef (image=quay.io/ceph/ceph:v20, name=sad_keldysh, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:02:22 np0005589310 systemd[1]: Started libpod-conmon-b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef.scope.
Jan 20 14:02:22 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/612880660' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 20 14:02:22 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:22 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef0a0d116e94604d16f9178c04a0381e210c559ac7fdf869ba022e99dea4ba8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:22 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef0a0d116e94604d16f9178c04a0381e210c559ac7fdf869ba022e99dea4ba8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:22 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef0a0d116e94604d16f9178c04a0381e210c559ac7fdf869ba022e99dea4ba8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:22 np0005589310 podman[75860]: 2026-01-20 19:02:22.445212981 +0000 UTC m=+0.110996117 container init b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef (image=quay.io/ceph/ceph:v20, name=sad_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:22 np0005589310 podman[75860]: 2026-01-20 19:02:22.450523006 +0000 UTC m=+0.116306122 container start b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef (image=quay.io/ceph/ceph:v20, name=sad_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 14:02:22 np0005589310 podman[75860]: 2026-01-20 19:02:22.361072276 +0000 UTC m=+0.026855442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:22 np0005589310 podman[75860]: 2026-01-20 19:02:22.45412615 +0000 UTC m=+0.119909286 container attach b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef (image=quay.io/ceph/ceph:v20, name=sad_keldysh, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:02:22 np0005589310 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 14:02:22 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:02:22 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 20 14:02:22 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1972552445' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 20 14:02:23 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/1972552445' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 20 14:02:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1972552445' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr respawn  1: '-n'
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr respawn  2: 'mgr.compute-0.meyjbf'
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr respawn  3: '-f'
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr respawn  4: '--setuser'
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr respawn  5: 'ceph'
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr respawn  6: '--setgroup'
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr respawn  7: 'ceph'
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr respawn  8: '--default-log-to-file=false'
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr respawn  9: '--default-log-to-journald=true'
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr respawn  exe_path /proc/self/exe
Jan 20 14:02:23 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.meyjbf(active, since 6s)
Jan 20 14:02:23 np0005589310 systemd[1]: libpod-b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef.scope: Deactivated successfully.
Jan 20 14:02:23 np0005589310 podman[75860]: 2026-01-20 19:02:23.57885192 +0000 UTC m=+1.244635046 container died b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef (image=quay.io/ceph/ceph:v20, name=sad_keldysh, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 14:02:23 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: ignoring --setuser ceph since I am not root
Jan 20 14:02:23 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: ignoring --setgroup ceph since I am not root
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: pidfile_write: ignore empty --pid-file
Jan 20 14:02:23 np0005589310 systemd[1]: var-lib-containers-storage-overlay-9ef0a0d116e94604d16f9178c04a0381e210c559ac7fdf869ba022e99dea4ba8-merged.mount: Deactivated successfully.
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'alerts'
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'balancer'
Jan 20 14:02:23 np0005589310 podman[75860]: 2026-01-20 19:02:23.797706021 +0000 UTC m=+1.463489137 container remove b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef (image=quay.io/ceph/ceph:v20, name=sad_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 20 14:02:23 np0005589310 systemd[1]: libpod-conmon-b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef.scope: Deactivated successfully.
Jan 20 14:02:23 np0005589310 podman[75935]: 2026-01-20 19:02:23.863726946 +0000 UTC m=+0.043324094 container create 396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:02:23 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'cephadm'
Jan 20 14:02:24 np0005589310 podman[75935]: 2026-01-20 19:02:23.845683319 +0000 UTC m=+0.025280497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:25 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/1972552445' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 20 14:02:25 np0005589310 systemd[1]: Started libpod-conmon-396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129.scope.
Jan 20 14:02:25 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:25 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b138406dbd280c5800200e171fdbe747b01bdc470d1d2e204575e6444bca73a0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:25 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b138406dbd280c5800200e171fdbe747b01bdc470d1d2e204575e6444bca73a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:25 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b138406dbd280c5800200e171fdbe747b01bdc470d1d2e204575e6444bca73a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:25 np0005589310 podman[75935]: 2026-01-20 19:02:25.171077566 +0000 UTC m=+1.350674734 container init 396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:02:25 np0005589310 podman[75935]: 2026-01-20 19:02:25.176394912 +0000 UTC m=+1.355992060 container start 396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:02:25 np0005589310 podman[75935]: 2026-01-20 19:02:25.179635387 +0000 UTC m=+1.359232565 container attach 396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 14:02:25 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'crash'
Jan 20 14:02:25 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'dashboard'
Jan 20 14:02:25 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 20 14:02:25 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3314750318' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 20 14:02:25 np0005589310 relaxed_robinson[75962]: {
Jan 20 14:02:25 np0005589310 relaxed_robinson[75962]:    "epoch": 5,
Jan 20 14:02:25 np0005589310 relaxed_robinson[75962]:    "available": true,
Jan 20 14:02:25 np0005589310 relaxed_robinson[75962]:    "active_name": "compute-0.meyjbf",
Jan 20 14:02:25 np0005589310 relaxed_robinson[75962]:    "num_standby": 0
Jan 20 14:02:25 np0005589310 relaxed_robinson[75962]: }
Jan 20 14:02:25 np0005589310 systemd[1]: libpod-396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129.scope: Deactivated successfully.
Jan 20 14:02:25 np0005589310 podman[75988]: 2026-01-20 19:02:25.695625864 +0000 UTC m=+0.024605254 container died 396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 20 14:02:25 np0005589310 systemd[1]: var-lib-containers-storage-overlay-b138406dbd280c5800200e171fdbe747b01bdc470d1d2e204575e6444bca73a0-merged.mount: Deactivated successfully.
Jan 20 14:02:25 np0005589310 podman[75988]: 2026-01-20 19:02:25.831920886 +0000 UTC m=+0.160900266 container remove 396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:25 np0005589310 systemd[1]: libpod-conmon-396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129.scope: Deactivated successfully.
Jan 20 14:02:25 np0005589310 podman[76003]: 2026-01-20 19:02:25.87818483 +0000 UTC m=+0.023323021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:26 np0005589310 podman[76003]: 2026-01-20 19:02:26.179478835 +0000 UTC m=+0.324617026 container create 8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa (image=quay.io/ceph/ceph:v20, name=interesting_engelbart, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:26 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'devicehealth'
Jan 20 14:02:26 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'diskprediction_local'
Jan 20 14:02:26 np0005589310 systemd[1]: Started libpod-conmon-8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa.scope.
Jan 20 14:02:26 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:26 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a528da0c7303622b842d8db3452cb10c05278aca2acce2a50015d73e4c2ad1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:26 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a528da0c7303622b842d8db3452cb10c05278aca2acce2a50015d73e4c2ad1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:26 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a528da0c7303622b842d8db3452cb10c05278aca2acce2a50015d73e4c2ad1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:26 np0005589310 podman[76003]: 2026-01-20 19:02:26.314471885 +0000 UTC m=+0.459610086 container init 8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa (image=quay.io/ceph/ceph:v20, name=interesting_engelbart, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:26 np0005589310 podman[76003]: 2026-01-20 19:02:26.319381111 +0000 UTC m=+0.464519282 container start 8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa (image=quay.io/ceph/ceph:v20, name=interesting_engelbart, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:02:26 np0005589310 podman[76003]: 2026-01-20 19:02:26.32498443 +0000 UTC m=+0.470122641 container attach 8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa (image=quay.io/ceph/ceph:v20, name=interesting_engelbart, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:26 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 20 14:02:26 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 20 14:02:26 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]:  from numpy import show_config as show_numpy_config
Jan 20 14:02:26 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'influx'
Jan 20 14:02:26 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'insights'
Jan 20 14:02:26 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'iostat'
Jan 20 14:02:26 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'k8sevents'
Jan 20 14:02:27 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'localpool'
Jan 20 14:02:27 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'mds_autoscaler'
Jan 20 14:02:27 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'mirroring'
Jan 20 14:02:27 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'nfs'
Jan 20 14:02:27 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'orchestrator'
Jan 20 14:02:27 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'osd_perf_query'
Jan 20 14:02:28 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'osd_support'
Jan 20 14:02:28 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'pg_autoscaler'
Jan 20 14:02:28 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'progress'
Jan 20 14:02:28 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'prometheus'
Jan 20 14:02:28 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'rbd_support'
Jan 20 14:02:28 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'rgw'
Jan 20 14:02:28 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'rook'
Jan 20 14:02:29 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'selftest'
Jan 20 14:02:29 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'smb'
Jan 20 14:02:29 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'snap_schedule'
Jan 20 14:02:30 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'stats'
Jan 20 14:02:30 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'status'
Jan 20 14:02:30 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'telegraf'
Jan 20 14:02:30 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'telemetry'
Jan 20 14:02:30 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'test_orchestrator'
Jan 20 14:02:30 np0005589310 ceph-mgr[75417]: mgr[py] Loading python module 'volumes'
Jan 20 14:02:30 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : Active manager daemon compute-0.meyjbf restarted
Jan 20 14:02:30 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 20 14:02:30 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 14:02:30 np0005589310 ceph-mgr[75417]: ms_deliver_dispatch: unhandled message 0x558c53b6a000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 20 14:02:30 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.meyjbf
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 20 14:02:31 np0005589310 ceph-mgr[75417]: mgr handle_mgr_map Activating!
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.meyjbf(active, starting, since 0.526527s)
Jan 20 14:02:31 np0005589310 ceph-mgr[75417]: mgr handle_mgr_map I am now activating
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.meyjbf", "id": "compute-0.meyjbf"} v 0)
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mgr metadata", "who": "compute-0.meyjbf", "id": "compute-0.meyjbf"} : dispatch
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mds metadata"} : dispatch
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).mds e1 all = 1
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata"} : dispatch
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mon metadata"} : dispatch
Jan 20 14:02:31 np0005589310 ceph-mgr[75417]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:31 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: balancer
Jan 20 14:02:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Starting
Jan 20 14:02:31 np0005589310 ceph-mgr[75417]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:02:31
Jan 20 14:02:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:02:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:02:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] No pools available
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : Manager daemon compute-0.meyjbf is now available
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: Active manager daemon compute-0.meyjbf restarted
Jan 20 14:02:31 np0005589310 ceph-mon[75120]: Activating manager daemon compute-0.meyjbf
Jan 20 14:02:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019908960 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 20 14:02:33 np0005589310 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.meyjbf(active, since 3s)
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14128 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14128 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 20 14:02:34 np0005589310 interesting_engelbart[76019]: {
Jan 20 14:02:34 np0005589310 interesting_engelbart[76019]:    "mgrmap_epoch": 7,
Jan 20 14:02:34 np0005589310 interesting_engelbart[76019]:    "initialized": true
Jan 20 14:02:34 np0005589310 interesting_engelbart[76019]: }
Jan 20 14:02:34 np0005589310 systemd[1]: libpod-8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa.scope: Deactivated successfully.
Jan 20 14:02:34 np0005589310 podman[76003]: 2026-01-20 19:02:34.042710178 +0000 UTC m=+8.187848409 container died 8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa (image=quay.io/ceph/ceph:v20, name=interesting_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: Manager daemon compute-0.meyjbf is now available
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:34 np0005589310 systemd[1]: var-lib-containers-storage-overlay-67a528da0c7303622b842d8db3452cb10c05278aca2acce2a50015d73e4c2ad1-merged.mount: Deactivated successfully.
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: cephadm
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: crash
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: devicehealth
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [devicehealth INFO root] Starting
Jan 20 14:02:34 np0005589310 podman[76003]: 2026-01-20 19:02:34.431997763 +0000 UTC m=+8.577135934 container remove 8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa (image=quay.io/ceph/ceph:v20, name=interesting_engelbart, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: iostat
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: nfs
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: orchestrator
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: pg_autoscaler
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: progress
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [progress INFO root] Loading...
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [progress INFO root] No stored events to load
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [progress INFO root] Loaded [] historic events
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [progress INFO root] Loaded OSDMap, ready.
Jan 20 14:02:34 np0005589310 systemd[1]: libpod-conmon-8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa.scope: Deactivated successfully.
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] recovery thread starting
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] starting setup
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: rbd_support
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: status
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/mirror_snapshot_schedule"} v 0)
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/mirror_snapshot_schedule"} : dispatch
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: telemetry
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] PerfHandler: starting
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TaskHandler: starting
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/trash_purge_schedule"} v 0)
Jan 20 14:02:34 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/trash_purge_schedule"} : dispatch
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] setup complete
Jan 20 14:02:34 np0005589310 ceph-mgr[75417]: mgr load Constructed class from module: volumes
Jan 20 14:02:34 np0005589310 podman[76119]: 2026-01-20 19:02:34.490690715 +0000 UTC m=+0.040097418 container create 3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533 (image=quay.io/ceph/ceph:v20, name=hopeful_aryabhata, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:34 np0005589310 systemd[1]: Started libpod-conmon-3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533.scope.
Jan 20 14:02:34 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:34 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/003079fc76c17ba5b9dae0c5c4fdd44c0a1af6e391fb2788f970b7cfd9399a49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:34 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/003079fc76c17ba5b9dae0c5c4fdd44c0a1af6e391fb2788f970b7cfd9399a49/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:34 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/003079fc76c17ba5b9dae0c5c4fdd44c0a1af6e391fb2788f970b7cfd9399a49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:34 np0005589310 podman[76119]: 2026-01-20 19:02:34.473501299 +0000 UTC m=+0.022908032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:34 np0005589310 podman[76119]: 2026-01-20 19:02:34.572604803 +0000 UTC m=+0.122011516 container init 3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533 (image=quay.io/ceph/ceph:v20, name=hopeful_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:02:34 np0005589310 podman[76119]: 2026-01-20 19:02:34.58023255 +0000 UTC m=+0.129639253 container start 3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533 (image=quay.io/ceph/ceph:v20, name=hopeful_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:34 np0005589310 podman[76119]: 2026-01-20 19:02:34.583411183 +0000 UTC m=+0.132817886 container attach 3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533 (image=quay.io/ceph/ceph:v20, name=hopeful_aryabhata, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 20 14:02:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/817075272' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 20 14:02:35 np0005589310 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 14:02:35 np0005589310 ceph-mgr[75417]: [cephadm INFO cherrypy.error] [20/Jan/2026:19:02:35] ENGINE Bus STARTING
Jan 20 14:02:35 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : [20/Jan/2026:19:02:35] ENGINE Bus STARTING
Jan 20 14:02:35 np0005589310 ceph-mgr[75417]: [cephadm INFO cherrypy.error] [20/Jan/2026:19:02:35] ENGINE Serving on http://192.168.122.100:8765
Jan 20 14:02:35 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : [20/Jan/2026:19:02:35] ENGINE Serving on http://192.168.122.100:8765
Jan 20 14:02:35 np0005589310 ceph-mgr[75417]: [cephadm INFO cherrypy.error] [20/Jan/2026:19:02:35] ENGINE Serving on https://192.168.122.100:7150
Jan 20 14:02:35 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : [20/Jan/2026:19:02:35] ENGINE Serving on https://192.168.122.100:7150
Jan 20 14:02:35 np0005589310 ceph-mgr[75417]: [cephadm INFO cherrypy.error] [20/Jan/2026:19:02:35] ENGINE Bus STARTED
Jan 20 14:02:35 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : [20/Jan/2026:19:02:35] ENGINE Bus STARTED
Jan 20 14:02:35 np0005589310 ceph-mgr[75417]: [cephadm INFO cherrypy.error] [20/Jan/2026:19:02:35] ENGINE Client ('192.168.122.100', 59146) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 14:02:35 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : [20/Jan/2026:19:02:35] ENGINE Client ('192.168.122.100', 59146) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 14:02:36 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 14:02:36 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 14:02:36 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:02:36 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:36 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:36 np0005589310 ceph-mon[75120]: Found migration_current of "None". Setting to last migration.
Jan 20 14:02:36 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:36 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:36 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/mirror_snapshot_schedule"} : dispatch
Jan 20 14:02:36 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/trash_purge_schedule"} : dispatch
Jan 20 14:02:36 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/817075272' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 20 14:02:36 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/817075272' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 20 14:02:36 np0005589310 hopeful_aryabhata[76186]: module 'orchestrator' is already enabled (always-on)
Jan 20 14:02:36 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.meyjbf(active, since 5s)
Jan 20 14:02:36 np0005589310 systemd[1]: libpod-3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533.scope: Deactivated successfully.
Jan 20 14:02:36 np0005589310 podman[76119]: 2026-01-20 19:02:36.757805926 +0000 UTC m=+2.307212629 container died 3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533 (image=quay.io/ceph/ceph:v20, name=hopeful_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:02:36 np0005589310 systemd[1]: var-lib-containers-storage-overlay-003079fc76c17ba5b9dae0c5c4fdd44c0a1af6e391fb2788f970b7cfd9399a49-merged.mount: Deactivated successfully.
Jan 20 14:02:36 np0005589310 podman[76119]: 2026-01-20 19:02:36.816287558 +0000 UTC m=+2.365694261 container remove 3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533 (image=quay.io/ceph/ceph:v20, name=hopeful_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:02:36 np0005589310 systemd[1]: libpod-conmon-3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533.scope: Deactivated successfully.
Jan 20 14:02:36 np0005589310 podman[76244]: 2026-01-20 19:02:36.873494658 +0000 UTC m=+0.037637230 container create cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224 (image=quay.io/ceph/ceph:v20, name=vigorous_curran, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 20 14:02:36 np0005589310 systemd[1]: Started libpod-conmon-cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224.scope.
Jan 20 14:02:36 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:36 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a76f276f8bdc57758e5c4d5582bdcef9a90f451c3d747f767bcecb74bf9b90d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:36 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a76f276f8bdc57758e5c4d5582bdcef9a90f451c3d747f767bcecb74bf9b90d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:36 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a76f276f8bdc57758e5c4d5582bdcef9a90f451c3d747f767bcecb74bf9b90d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:36 np0005589310 podman[76244]: 2026-01-20 19:02:36.928872881 +0000 UTC m=+0.093015483 container init cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224 (image=quay.io/ceph/ceph:v20, name=vigorous_curran, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:36 np0005589310 podman[76244]: 2026-01-20 19:02:36.933373667 +0000 UTC m=+0.097516239 container start cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224 (image=quay.io/ceph/ceph:v20, name=vigorous_curran, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:02:36 np0005589310 podman[76244]: 2026-01-20 19:02:36.936631323 +0000 UTC m=+0.100773915 container attach cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224 (image=quay.io/ceph/ceph:v20, name=vigorous_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:02:36 np0005589310 podman[76244]: 2026-01-20 19:02:36.85564216 +0000 UTC m=+0.019784752 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052667 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:37 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:02:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 20 14:02:37 np0005589310 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 14:02:37 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:37 np0005589310 ceph-mon[75120]: [20/Jan/2026:19:02:35] ENGINE Bus STARTING
Jan 20 14:02:37 np0005589310 ceph-mon[75120]: [20/Jan/2026:19:02:35] ENGINE Serving on http://192.168.122.100:8765
Jan 20 14:02:37 np0005589310 ceph-mon[75120]: [20/Jan/2026:19:02:35] ENGINE Serving on https://192.168.122.100:7150
Jan 20 14:02:37 np0005589310 ceph-mon[75120]: [20/Jan/2026:19:02:35] ENGINE Bus STARTED
Jan 20 14:02:37 np0005589310 ceph-mon[75120]: [20/Jan/2026:19:02:35] ENGINE Client ('192.168.122.100', 59146) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 14:02:37 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/817075272' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 20 14:02:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 14:02:37 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 14:02:37 np0005589310 systemd[1]: libpod-cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224.scope: Deactivated successfully.
Jan 20 14:02:37 np0005589310 podman[76244]: 2026-01-20 19:02:37.816154746 +0000 UTC m=+0.980297328 container died cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224 (image=quay.io/ceph/ceph:v20, name=vigorous_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 14:02:37 np0005589310 systemd[1]: var-lib-containers-storage-overlay-8a76f276f8bdc57758e5c4d5582bdcef9a90f451c3d747f767bcecb74bf9b90d-merged.mount: Deactivated successfully.
Jan 20 14:02:37 np0005589310 podman[76244]: 2026-01-20 19:02:37.905955414 +0000 UTC m=+1.070097986 container remove cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224 (image=quay.io/ceph/ceph:v20, name=vigorous_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:02:37 np0005589310 systemd[1]: libpod-conmon-cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224.scope: Deactivated successfully.
Jan 20 14:02:37 np0005589310 podman[76300]: 2026-01-20 19:02:37.969694248 +0000 UTC m=+0.043239080 container create 341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672 (image=quay.io/ceph/ceph:v20, name=jovial_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:38 np0005589310 systemd[1]: Started libpod-conmon-341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672.scope.
Jan 20 14:02:38 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:38 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c742a7f3bff75b68f58cdc70fe115d758c7f9381f6a4c18cd72212694babf78e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:38 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c742a7f3bff75b68f58cdc70fe115d758c7f9381f6a4c18cd72212694babf78e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:38 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c742a7f3bff75b68f58cdc70fe115d758c7f9381f6a4c18cd72212694babf78e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:38 np0005589310 podman[76300]: 2026-01-20 19:02:37.950105586 +0000 UTC m=+0.023650468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:38 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:02:38 np0005589310 podman[76300]: 2026-01-20 19:02:38.542335017 +0000 UTC m=+0.615879899 container init 341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672 (image=quay.io/ceph/ceph:v20, name=jovial_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:02:38 np0005589310 podman[76300]: 2026-01-20 19:02:38.552741047 +0000 UTC m=+0.626285879 container start 341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672 (image=quay.io/ceph/ceph:v20, name=jovial_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:02:38 np0005589310 podman[76300]: 2026-01-20 19:02:38.557738168 +0000 UTC m=+0.631283050 container attach 341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672 (image=quay.io/ceph/ceph:v20, name=jovial_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Jan 20 14:02:38 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:02:39 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 20 14:02:39 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:39 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:39 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Set ssh ssh_user
Jan 20 14:02:39 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 20 14:02:39 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 20 14:02:39 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:39 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Set ssh ssh_config
Jan 20 14:02:39 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 20 14:02:39 np0005589310 ceph-mgr[75417]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 20 14:02:39 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 20 14:02:39 np0005589310 jovial_curran[76316]: ssh user set to ceph-admin. sudo will be used
Jan 20 14:02:39 np0005589310 systemd[1]: libpod-341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672.scope: Deactivated successfully.
Jan 20 14:02:39 np0005589310 podman[76300]: 2026-01-20 19:02:39.053959503 +0000 UTC m=+1.127504335 container died 341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672 (image=quay.io/ceph/ceph:v20, name=jovial_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:39 np0005589310 systemd[1]: var-lib-containers-storage-overlay-c742a7f3bff75b68f58cdc70fe115d758c7f9381f6a4c18cd72212694babf78e-merged.mount: Deactivated successfully.
Jan 20 14:02:39 np0005589310 podman[76300]: 2026-01-20 19:02:39.094628788 +0000 UTC m=+1.168173620 container remove 341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672 (image=quay.io/ceph/ceph:v20, name=jovial_curran, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:39 np0005589310 systemd[1]: libpod-conmon-341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672.scope: Deactivated successfully.
Jan 20 14:02:39 np0005589310 podman[76355]: 2026-01-20 19:02:39.139084856 +0000 UTC m=+0.026153098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:39 np0005589310 podman[76355]: 2026-01-20 19:02:39.461624482 +0000 UTC m=+0.348692694 container create fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c (image=quay.io/ceph/ceph:v20, name=keen_haibt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:39 np0005589310 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 14:02:39 np0005589310 systemd[1]: Started libpod-conmon-fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c.scope.
Jan 20 14:02:39 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1b34f52ec97437ea268ad3f0ad254b423c25cb8a44f816bf29dfe9bc3d354d/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1b34f52ec97437ea268ad3f0ad254b423c25cb8a44f816bf29dfe9bc3d354d/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1b34f52ec97437ea268ad3f0ad254b423c25cb8a44f816bf29dfe9bc3d354d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1b34f52ec97437ea268ad3f0ad254b423c25cb8a44f816bf29dfe9bc3d354d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1b34f52ec97437ea268ad3f0ad254b423c25cb8a44f816bf29dfe9bc3d354d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:40 np0005589310 podman[76355]: 2026-01-20 19:02:40.217850758 +0000 UTC m=+1.104919060 container init fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c (image=quay.io/ceph/ceph:v20, name=keen_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 14:02:40 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:40 np0005589310 ceph-mon[75120]: Set ssh ssh_user
Jan 20 14:02:40 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:40 np0005589310 ceph-mon[75120]: Set ssh ssh_config
Jan 20 14:02:40 np0005589310 ceph-mon[75120]: ssh user set to ceph-admin. sudo will be used
Jan 20 14:02:40 np0005589310 podman[76355]: 2026-01-20 19:02:40.230100796 +0000 UTC m=+1.117169048 container start fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c (image=quay.io/ceph/ceph:v20, name=keen_haibt, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:02:40 np0005589310 podman[76355]: 2026-01-20 19:02:40.248623187 +0000 UTC m=+1.135691469 container attach fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c (image=quay.io/ceph/ceph:v20, name=keen_haibt, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:02:40 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:02:40 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:02:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 20 14:02:40 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:40 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 20 14:02:40 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 20 14:02:40 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Set ssh private key
Jan 20 14:02:40 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 20 14:02:40 np0005589310 systemd[1]: libpod-fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c.scope: Deactivated successfully.
Jan 20 14:02:40 np0005589310 podman[76355]: 2026-01-20 19:02:40.675836395 +0000 UTC m=+1.562904607 container died fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c (image=quay.io/ceph/ceph:v20, name=keen_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:02:40 np0005589310 systemd[1]: var-lib-containers-storage-overlay-7f1b34f52ec97437ea268ad3f0ad254b423c25cb8a44f816bf29dfe9bc3d354d-merged.mount: Deactivated successfully.
Jan 20 14:02:40 np0005589310 podman[76355]: 2026-01-20 19:02:40.719523165 +0000 UTC m=+1.606591377 container remove fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c (image=quay.io/ceph/ceph:v20, name=keen_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 20 14:02:40 np0005589310 systemd[1]: libpod-conmon-fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c.scope: Deactivated successfully.
Jan 20 14:02:40 np0005589310 podman[76409]: 2026-01-20 19:02:40.774847304 +0000 UTC m=+0.037029770 container create b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b (image=quay.io/ceph/ceph:v20, name=busy_lalande, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:40 np0005589310 systemd[1]: Started libpod-conmon-b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b.scope.
Jan 20 14:02:40 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:40 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54b9ec3f609a781152ee4e19d59b25ab3c181263f990976b4c23efd4a5a6de/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:40 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54b9ec3f609a781152ee4e19d59b25ab3c181263f990976b4c23efd4a5a6de/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:40 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54b9ec3f609a781152ee4e19d59b25ab3c181263f990976b4c23efd4a5a6de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:40 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54b9ec3f609a781152ee4e19d59b25ab3c181263f990976b4c23efd4a5a6de/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:40 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54b9ec3f609a781152ee4e19d59b25ab3c181263f990976b4c23efd4a5a6de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:40 np0005589310 podman[76409]: 2026-01-20 19:02:40.847100318 +0000 UTC m=+0.109282794 container init b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b (image=quay.io/ceph/ceph:v20, name=busy_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 20 14:02:40 np0005589310 podman[76409]: 2026-01-20 19:02:40.852155622 +0000 UTC m=+0.114338088 container start b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b (image=quay.io/ceph/ceph:v20, name=busy_lalande, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:40 np0005589310 podman[76409]: 2026-01-20 19:02:40.759044815 +0000 UTC m=+0.021227301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:40 np0005589310 podman[76409]: 2026-01-20 19:02:40.85587846 +0000 UTC m=+0.118060936 container attach b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b (image=quay.io/ceph/ceph:v20, name=busy_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 14:02:41 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:02:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 20 14:02:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:41 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 20 14:02:41 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 20 14:02:41 np0005589310 systemd[1]: libpod-b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b.scope: Deactivated successfully.
Jan 20 14:02:41 np0005589310 podman[76409]: 2026-01-20 19:02:41.261839967 +0000 UTC m=+0.524022453 container died b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b (image=quay.io/ceph/ceph:v20, name=busy_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 14:02:41 np0005589310 systemd[1]: var-lib-containers-storage-overlay-1d54b9ec3f609a781152ee4e19d59b25ab3c181263f990976b4c23efd4a5a6de-merged.mount: Deactivated successfully.
Jan 20 14:02:41 np0005589310 podman[76409]: 2026-01-20 19:02:41.295413331 +0000 UTC m=+0.557595797 container remove b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b (image=quay.io/ceph/ceph:v20, name=busy_lalande, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:41 np0005589310 systemd[1]: libpod-conmon-b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b.scope: Deactivated successfully.
Jan 20 14:02:41 np0005589310 podman[76464]: 2026-01-20 19:02:41.3629888 +0000 UTC m=+0.049918231 container create b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede (image=quay.io/ceph/ceph:v20, name=vigorous_banach, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 14:02:41 np0005589310 systemd[1]: Started libpod-conmon-b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede.scope.
Jan 20 14:02:41 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:41 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2effd839d83fe20d49dbbc52c29f34d23b159b0fd8dee8d432c4b085280ab387/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:41 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2effd839d83fe20d49dbbc52c29f34d23b159b0fd8dee8d432c4b085280ab387/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:41 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2effd839d83fe20d49dbbc52c29f34d23b159b0fd8dee8d432c4b085280ab387/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:41 np0005589310 podman[76464]: 2026-01-20 19:02:41.428988303 +0000 UTC m=+0.115917744 container init b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede (image=quay.io/ceph/ceph:v20, name=vigorous_banach, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 20 14:02:41 np0005589310 podman[76464]: 2026-01-20 19:02:41.336057965 +0000 UTC m=+0.022987486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:41 np0005589310 podman[76464]: 2026-01-20 19:02:41.433411946 +0000 UTC m=+0.120341387 container start b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede (image=quay.io/ceph/ceph:v20, name=vigorous_banach, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:41 np0005589310 podman[76464]: 2026-01-20 19:02:41.438024357 +0000 UTC m=+0.124953798 container attach b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede (image=quay.io/ceph/ceph:v20, name=vigorous_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 14:02:41 np0005589310 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 14:02:41 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:41 np0005589310 ceph-mon[75120]: Set ssh ssh_identity_key
Jan 20 14:02:41 np0005589310 ceph-mon[75120]: Set ssh private key
Jan 20 14:02:41 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:41 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:02:41 np0005589310 vigorous_banach[76480]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRdjTUKzSdmKvb3VwE6HE/qbW6hBZuJGSGDa7vwZvuK+uZHe8W/4BziBmg9gcZ6u6FDNHkIMinQJNsQBSP2Ak5KZdiDPCHcM6W7/ajdmqThMfxESSt/3LoU0t7kmc/lAU7NXy70cc05z46Oe9LtwVu+tM8CDfI3vKJHrr5jaHgmTiHQSMuWuPz2ERtV8lTVZyy3CTKXmg/fWNfbcr7T8Gtbkkx/pzgjxy5loaPKzQWZXjVg+Jvxcpyl2uL6a7k/xmW3uRoKLCujuI1GPj4sbFGShG3DT8vVNhqla0rmF6/ltXz9fFMUoVfpoCdQqdeMrBi2JTyITWTqiH2HZETIUygLFC1VJZUfEIEFpSQFpCMNA8kFH2qzJkxd2ynLCUwWGCEVK//8ye5jmGGtOwSrP2ABF0V8zuwA4Qv56RT0uKq4cy0tTIPNrF9q/t0TbAg6bkg/ziEkFc49CYuzPg0MYpWGiIp1RuH4DpLgPrby4mpruxKDOTqe7BLGnES0JT5VJE= zuul@controller
Jan 20 14:02:41 np0005589310 systemd[1]: libpod-b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede.scope: Deactivated successfully.
Jan 20 14:02:41 np0005589310 podman[76464]: 2026-01-20 19:02:41.827561744 +0000 UTC m=+0.514491225 container died b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede (image=quay.io/ceph/ceph:v20, name=vigorous_banach, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:02:41 np0005589310 systemd[1]: var-lib-containers-storage-overlay-2effd839d83fe20d49dbbc52c29f34d23b159b0fd8dee8d432c4b085280ab387-merged.mount: Deactivated successfully.
Jan 20 14:02:41 np0005589310 podman[76464]: 2026-01-20 19:02:41.872052483 +0000 UTC m=+0.558981934 container remove b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede (image=quay.io/ceph/ceph:v20, name=vigorous_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:41 np0005589310 systemd[1]: libpod-conmon-b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede.scope: Deactivated successfully.
Jan 20 14:02:41 np0005589310 podman[76518]: 2026-01-20 19:02:41.929171159 +0000 UTC m=+0.037018621 container create 89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735 (image=quay.io/ceph/ceph:v20, name=objective_pascal, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:02:41 np0005589310 systemd[1]: Started libpod-conmon-89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735.scope.
Jan 20 14:02:41 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:41 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d48f8543f6e0aa548083acf2fd7af041e85fd0fca6522da08c053ef1110677/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:41 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d48f8543f6e0aa548083acf2fd7af041e85fd0fca6522da08c053ef1110677/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:41 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d48f8543f6e0aa548083acf2fd7af041e85fd0fca6522da08c053ef1110677/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:41 np0005589310 podman[76518]: 2026-01-20 19:02:41.990457385 +0000 UTC m=+0.098304877 container init 89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735 (image=quay.io/ceph/ceph:v20, name=objective_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 14:02:41 np0005589310 podman[76518]: 2026-01-20 19:02:41.994725081 +0000 UTC m=+0.102572543 container start 89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735 (image=quay.io/ceph/ceph:v20, name=objective_pascal, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 20 14:02:42 np0005589310 podman[76518]: 2026-01-20 19:02:42.000015394 +0000 UTC m=+0.107862866 container attach 89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735 (image=quay.io/ceph/ceph:v20, name=objective_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:02:42 np0005589310 podman[76518]: 2026-01-20 19:02:41.912264366 +0000 UTC m=+0.020111848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:42 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054702 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:42 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:02:42 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:02:42 np0005589310 systemd[1]: Created slice User Slice of UID 42477.
Jan 20 14:02:42 np0005589310 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 20 14:02:42 np0005589310 systemd-logind[797]: New session 21 of user ceph-admin.
Jan 20 14:02:42 np0005589310 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 20 14:02:42 np0005589310 systemd[1]: Starting User Manager for UID 42477...
Jan 20 14:02:42 np0005589310 ceph-mon[75120]: Set ssh ssh_identity_pub
Jan 20 14:02:42 np0005589310 systemd[76564]: Queued start job for default target Main User Target.
Jan 20 14:02:42 np0005589310 systemd[76564]: Created slice User Application Slice.
Jan 20 14:02:42 np0005589310 systemd[76564]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 14:02:42 np0005589310 systemd[76564]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 14:02:42 np0005589310 systemd[76564]: Reached target Paths.
Jan 20 14:02:42 np0005589310 systemd[76564]: Reached target Timers.
Jan 20 14:02:42 np0005589310 systemd[76564]: Starting D-Bus User Message Bus Socket...
Jan 20 14:02:42 np0005589310 systemd[76564]: Starting Create User's Volatile Files and Directories...
Jan 20 14:02:42 np0005589310 systemd[76564]: Finished Create User's Volatile Files and Directories.
Jan 20 14:02:42 np0005589310 systemd[76564]: Listening on D-Bus User Message Bus Socket.
Jan 20 14:02:42 np0005589310 systemd[76564]: Reached target Sockets.
Jan 20 14:02:42 np0005589310 systemd[76564]: Reached target Basic System.
Jan 20 14:02:42 np0005589310 systemd[76564]: Reached target Main User Target.
Jan 20 14:02:42 np0005589310 systemd[76564]: Startup finished in 127ms.
Jan 20 14:02:42 np0005589310 systemd[1]: Started User Manager for UID 42477.
Jan 20 14:02:42 np0005589310 systemd[1]: Started Session 21 of User ceph-admin.
Jan 20 14:02:42 np0005589310 systemd-logind[797]: New session 23 of user ceph-admin.
Jan 20 14:02:42 np0005589310 systemd[1]: Started Session 23 of User ceph-admin.
Jan 20 14:02:43 np0005589310 systemd-logind[797]: New session 24 of user ceph-admin.
Jan 20 14:02:43 np0005589310 systemd[1]: Started Session 24 of User ceph-admin.
Jan 20 14:02:43 np0005589310 systemd-logind[797]: New session 25 of user ceph-admin.
Jan 20 14:02:43 np0005589310 systemd[1]: Started Session 25 of User ceph-admin.
Jan 20 14:02:43 np0005589310 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 14:02:43 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 20 14:02:43 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 20 14:02:43 np0005589310 ceph-mon[75120]: Deploying cephadm binary to compute-0
Jan 20 14:02:43 np0005589310 systemd-logind[797]: New session 26 of user ceph-admin.
Jan 20 14:02:43 np0005589310 systemd[1]: Started Session 26 of User ceph-admin.
Jan 20 14:02:44 np0005589310 systemd-logind[797]: New session 27 of user ceph-admin.
Jan 20 14:02:44 np0005589310 systemd[1]: Started Session 27 of User ceph-admin.
Jan 20 14:02:44 np0005589310 systemd-logind[797]: New session 28 of user ceph-admin.
Jan 20 14:02:44 np0005589310 systemd[1]: Started Session 28 of User ceph-admin.
Jan 20 14:02:44 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:02:44 np0005589310 systemd-logind[797]: New session 29 of user ceph-admin.
Jan 20 14:02:44 np0005589310 systemd[1]: Started Session 29 of User ceph-admin.
Jan 20 14:02:45 np0005589310 systemd-logind[797]: New session 30 of user ceph-admin.
Jan 20 14:02:45 np0005589310 systemd[1]: Started Session 30 of User ceph-admin.
Jan 20 14:02:45 np0005589310 systemd-logind[797]: New session 31 of user ceph-admin.
Jan 20 14:02:45 np0005589310 systemd[1]: Started Session 31 of User ceph-admin.
Jan 20 14:02:45 np0005589310 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 14:02:46 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:02:46 np0005589310 systemd-logind[797]: New session 32 of user ceph-admin.
Jan 20 14:02:46 np0005589310 systemd[1]: Started Session 32 of User ceph-admin.
Jan 20 14:02:47 np0005589310 systemd-logind[797]: New session 33 of user ceph-admin.
Jan 20 14:02:47 np0005589310 systemd[1]: Started Session 33 of User ceph-admin.
Jan 20 14:02:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:47 np0005589310 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 14:02:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 14:02:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:47 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Added host compute-0
Jan 20 14:02:47 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 20 14:02:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 14:02:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 14:02:47 np0005589310 objective_pascal[76534]: Added host 'compute-0' with addr '192.168.122.100'
Jan 20 14:02:47 np0005589310 systemd[1]: libpod-89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735.scope: Deactivated successfully.
Jan 20 14:02:47 np0005589310 podman[76518]: 2026-01-20 19:02:47.601566808 +0000 UTC m=+5.709414280 container died 89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735 (image=quay.io/ceph/ceph:v20, name=objective_pascal, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 14:02:47 np0005589310 systemd[1]: var-lib-containers-storage-overlay-45d48f8543f6e0aa548083acf2fd7af041e85fd0fca6522da08c053ef1110677-merged.mount: Deactivated successfully.
Jan 20 14:02:47 np0005589310 podman[76518]: 2026-01-20 19:02:47.640273409 +0000 UTC m=+5.748120881 container remove 89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735 (image=quay.io/ceph/ceph:v20, name=objective_pascal, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:47 np0005589310 systemd[1]: libpod-conmon-89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735.scope: Deactivated successfully.
Jan 20 14:02:47 np0005589310 podman[76956]: 2026-01-20 19:02:47.71309927 +0000 UTC m=+0.047177239 container create 7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:47 np0005589310 systemd[1]: Started libpod-conmon-7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2.scope.
Jan 20 14:02:47 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:47 np0005589310 podman[76956]: 2026-01-20 19:02:47.692598074 +0000 UTC m=+0.026676073 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:47 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20063aa07fe9c10ac513b70e7c0e06da618cedf49268171f9a2dbfd1d3498e50/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:47 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20063aa07fe9c10ac513b70e7c0e06da618cedf49268171f9a2dbfd1d3498e50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:47 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20063aa07fe9c10ac513b70e7c0e06da618cedf49268171f9a2dbfd1d3498e50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:47 np0005589310 podman[76956]: 2026-01-20 19:02:47.814947306 +0000 UTC m=+0.149025285 container init 7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 14:02:47 np0005589310 podman[76956]: 2026-01-20 19:02:47.823233574 +0000 UTC m=+0.157311533 container start 7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:02:47 np0005589310 podman[76956]: 2026-01-20 19:02:47.826690231 +0000 UTC m=+0.160768350 container attach 7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:02:48 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:02:48 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 20 14:02:48 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 20 14:02:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 20 14:02:48 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:48 np0005589310 hopeful_mahavira[76997]: Scheduled mon update...
Jan 20 14:02:48 np0005589310 systemd[1]: libpod-7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2.scope: Deactivated successfully.
Jan 20 14:02:48 np0005589310 podman[76956]: 2026-01-20 19:02:48.278650969 +0000 UTC m=+0.612728948 container died 7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:02:48 np0005589310 systemd[1]: var-lib-containers-storage-overlay-20063aa07fe9c10ac513b70e7c0e06da618cedf49268171f9a2dbfd1d3498e50-merged.mount: Deactivated successfully.
Jan 20 14:02:48 np0005589310 podman[76956]: 2026-01-20 19:02:48.32213869 +0000 UTC m=+0.656216659 container remove 7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:02:48 np0005589310 systemd[1]: libpod-conmon-7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2.scope: Deactivated successfully.
Jan 20 14:02:48 np0005589310 podman[77059]: 2026-01-20 19:02:48.377520702 +0000 UTC m=+0.036133178 container create 67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b (image=quay.io/ceph/ceph:v20, name=kind_aryabhata, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 14:02:48 np0005589310 systemd[1]: Started libpod-conmon-67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b.scope.
Jan 20 14:02:48 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:48 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:02:48 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eea952879053eba462b19b212056b374f3633ad68c1ae980a793180a4b634f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:48 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eea952879053eba462b19b212056b374f3633ad68c1ae980a793180a4b634f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:48 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eea952879053eba462b19b212056b374f3633ad68c1ae980a793180a4b634f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:48 np0005589310 podman[77059]: 2026-01-20 19:02:48.453872954 +0000 UTC m=+0.112485460 container init 67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b (image=quay.io/ceph/ceph:v20, name=kind_aryabhata, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:48 np0005589310 podman[77059]: 2026-01-20 19:02:48.361164877 +0000 UTC m=+0.019777363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:48 np0005589310 podman[77059]: 2026-01-20 19:02:48.460553255 +0000 UTC m=+0.119165741 container start 67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b (image=quay.io/ceph/ceph:v20, name=kind_aryabhata, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:48 np0005589310 podman[77059]: 2026-01-20 19:02:48.464539556 +0000 UTC m=+0.123152072 container attach 67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b (image=quay.io/ceph/ceph:v20, name=kind_aryabhata, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:02:48 np0005589310 podman[77033]: 2026-01-20 19:02:48.555026336 +0000 UTC m=+0.583051501 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:48 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:48 np0005589310 ceph-mon[75120]: Added host compute-0
Jan 20 14:02:48 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:48 np0005589310 podman[77112]: 2026-01-20 19:02:48.687942677 +0000 UTC m=+0.055455958 container create 66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068 (image=quay.io/ceph/ceph:v20, name=bold_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:02:48 np0005589310 systemd[1]: Started libpod-conmon-66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068.scope.
Jan 20 14:02:48 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:48 np0005589310 podman[77112]: 2026-01-20 19:02:48.660194392 +0000 UTC m=+0.027707703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:48 np0005589310 podman[77112]: 2026-01-20 19:02:48.763645716 +0000 UTC m=+0.131158997 container init 66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068 (image=quay.io/ceph/ceph:v20, name=bold_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:48 np0005589310 podman[77112]: 2026-01-20 19:02:48.768877158 +0000 UTC m=+0.136390429 container start 66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068 (image=quay.io/ceph/ceph:v20, name=bold_easley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 20 14:02:48 np0005589310 podman[77112]: 2026-01-20 19:02:48.772111143 +0000 UTC m=+0.139624504 container attach 66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068 (image=quay.io/ceph/ceph:v20, name=bold_easley, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 14:02:48 np0005589310 bold_easley[77128]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 20 14:02:48 np0005589310 systemd[1]: libpod-66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068.scope: Deactivated successfully.
Jan 20 14:02:48 np0005589310 podman[77112]: 2026-01-20 19:02:48.866312891 +0000 UTC m=+0.233826172 container died 66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068 (image=quay.io/ceph/ceph:v20, name=bold_easley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:02:48 np0005589310 systemd[1]: var-lib-containers-storage-overlay-d2464ef4ed56d044c3cf15dae47639aaf697be1ed46199c6e7c95b4b1d52c49b-merged.mount: Deactivated successfully.
Jan 20 14:02:48 np0005589310 podman[77112]: 2026-01-20 19:02:48.902411517 +0000 UTC m=+0.269924798 container remove 66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068 (image=quay.io/ceph/ceph:v20, name=bold_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:02:48 np0005589310 systemd[1]: libpod-conmon-66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068.scope: Deactivated successfully.
Jan 20 14:02:48 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:02:48 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 20 14:02:48 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 20 14:02:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 14:02:48 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:48 np0005589310 kind_aryabhata[77075]: Scheduled mgr update...
Jan 20 14:02:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 20 14:02:48 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:48 np0005589310 systemd[1]: libpod-67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b.scope: Deactivated successfully.
Jan 20 14:02:48 np0005589310 podman[77059]: 2026-01-20 19:02:48.98360764 +0000 UTC m=+0.642220136 container died 67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b (image=quay.io/ceph/ceph:v20, name=kind_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 14:02:49 np0005589310 systemd[1]: var-lib-containers-storage-overlay-30eea952879053eba462b19b212056b374f3633ad68c1ae980a793180a4b634f-merged.mount: Deactivated successfully.
Jan 20 14:02:49 np0005589310 podman[77059]: 2026-01-20 19:02:49.030962917 +0000 UTC m=+0.689575423 container remove 67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b (image=quay.io/ceph/ceph:v20, name=kind_aryabhata, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:02:49 np0005589310 systemd[1]: libpod-conmon-67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b.scope: Deactivated successfully.
Jan 20 14:02:49 np0005589310 podman[77181]: 2026-01-20 19:02:49.105351853 +0000 UTC m=+0.052250873 container create 9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915 (image=quay.io/ceph/ceph:v20, name=happy_almeida, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:49 np0005589310 systemd[1]: Started libpod-conmon-9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915.scope.
Jan 20 14:02:49 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:49 np0005589310 podman[77181]: 2026-01-20 19:02:49.082516546 +0000 UTC m=+0.029415586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:49 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cd881e9fde525d386c27f00b2f1a25e46e194a198b2afbfb4a02e8d3fcd9c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:49 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cd881e9fde525d386c27f00b2f1a25e46e194a198b2afbfb4a02e8d3fcd9c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:49 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cd881e9fde525d386c27f00b2f1a25e46e194a198b2afbfb4a02e8d3fcd9c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:49 np0005589310 podman[77181]: 2026-01-20 19:02:49.199250007 +0000 UTC m=+0.146149027 container init 9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915 (image=quay.io/ceph/ceph:v20, name=happy_almeida, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:02:49 np0005589310 podman[77181]: 2026-01-20 19:02:49.205189272 +0000 UTC m=+0.152088292 container start 9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915 (image=quay.io/ceph/ceph:v20, name=happy_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 20 14:02:49 np0005589310 podman[77181]: 2026-01-20 19:02:49.223534374 +0000 UTC m=+0.170433534 container attach 9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915 (image=quay.io/ceph/ceph:v20, name=happy_almeida, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:02:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:02:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:49 np0005589310 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 14:02:49 np0005589310 ceph-mon[75120]: Saving service mon spec with placement count:5
Jan 20 14:02:49 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:49 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:49 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:49 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:02:49 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Saving service crash spec with placement *
Jan 20 14:02:49 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 20 14:02:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 20 14:02:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:49 np0005589310 happy_almeida[77224]: Scheduled crash update...
Jan 20 14:02:49 np0005589310 systemd[1]: libpod-9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915.scope: Deactivated successfully.
Jan 20 14:02:49 np0005589310 podman[77181]: 2026-01-20 19:02:49.653078665 +0000 UTC m=+0.599977685 container died 9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915 (image=quay.io/ceph/ceph:v20, name=happy_almeida, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:02:49 np0005589310 systemd[1]: var-lib-containers-storage-overlay-a9cd881e9fde525d386c27f00b2f1a25e46e194a198b2afbfb4a02e8d3fcd9c2-merged.mount: Deactivated successfully.
Jan 20 14:02:49 np0005589310 podman[77181]: 2026-01-20 19:02:49.695254922 +0000 UTC m=+0.642153942 container remove 9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915 (image=quay.io/ceph/ceph:v20, name=happy_almeida, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 14:02:49 np0005589310 systemd[1]: libpod-conmon-9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915.scope: Deactivated successfully.
Jan 20 14:02:49 np0005589310 podman[77346]: 2026-01-20 19:02:49.753283232 +0000 UTC m=+0.038444609 container create 91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701 (image=quay.io/ceph/ceph:v20, name=sweet_easley, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:49 np0005589310 systemd[1]: Started libpod-conmon-91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701.scope.
Jan 20 14:02:49 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:49 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d809f214436a9e21b10dc659c6b58a3bc15d2edf6979bf209acb70f23626146c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:49 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d809f214436a9e21b10dc659c6b58a3bc15d2edf6979bf209acb70f23626146c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:49 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d809f214436a9e21b10dc659c6b58a3bc15d2edf6979bf209acb70f23626146c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:49 np0005589310 podman[77346]: 2026-01-20 19:02:49.82062698 +0000 UTC m=+0.105788377 container init 91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701 (image=quay.io/ceph/ceph:v20, name=sweet_easley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:49 np0005589310 podman[77346]: 2026-01-20 19:02:49.8266568 +0000 UTC m=+0.111818227 container start 91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701 (image=quay.io/ceph/ceph:v20, name=sweet_easley, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:49 np0005589310 podman[77346]: 2026-01-20 19:02:49.735332299 +0000 UTC m=+0.020493696 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:49 np0005589310 podman[77346]: 2026-01-20 19:02:49.831347806 +0000 UTC m=+0.116509203 container attach 91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701 (image=quay.io/ceph/ceph:v20, name=sweet_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 14:02:49 np0005589310 podman[77398]: 2026-01-20 19:02:49.886609102 +0000 UTC m=+0.046433503 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:02:49 np0005589310 podman[77398]: 2026-01-20 19:02:49.980658933 +0000 UTC m=+0.140483314 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:02:50 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:02:50 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:50 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 20 14:02:50 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1711333958' entity='client.admin' 
Jan 20 14:02:50 np0005589310 systemd[1]: libpod-91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701.scope: Deactivated successfully.
Jan 20 14:02:50 np0005589310 podman[77346]: 2026-01-20 19:02:50.244254425 +0000 UTC m=+0.529415802 container died 91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701 (image=quay.io/ceph/ceph:v20, name=sweet_easley, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:02:50 np0005589310 systemd[1]: var-lib-containers-storage-overlay-d809f214436a9e21b10dc659c6b58a3bc15d2edf6979bf209acb70f23626146c-merged.mount: Deactivated successfully.
Jan 20 14:02:50 np0005589310 podman[77346]: 2026-01-20 19:02:50.276122997 +0000 UTC m=+0.561284374 container remove 91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701 (image=quay.io/ceph/ceph:v20, name=sweet_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 20 14:02:50 np0005589310 systemd[1]: libpod-conmon-91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701.scope: Deactivated successfully.
Jan 20 14:02:50 np0005589310 podman[77555]: 2026-01-20 19:02:50.33629512 +0000 UTC m=+0.038248079 container create 3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e (image=quay.io/ceph/ceph:v20, name=thirsty_bardeen, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:02:50 np0005589310 systemd[1]: Started libpod-conmon-3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e.scope.
Jan 20 14:02:50 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1258b1ba0e8e5cab2c1b9edea32894afd2ee7a72b209fa355bd6c57b4bbff93d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1258b1ba0e8e5cab2c1b9edea32894afd2ee7a72b209fa355bd6c57b4bbff93d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1258b1ba0e8e5cab2c1b9edea32894afd2ee7a72b209fa355bd6c57b4bbff93d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:50 np0005589310 podman[77555]: 2026-01-20 19:02:50.404377003 +0000 UTC m=+0.106329972 container init 3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e (image=quay.io/ceph/ceph:v20, name=thirsty_bardeen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:02:50 np0005589310 podman[77555]: 2026-01-20 19:02:50.41012313 +0000 UTC m=+0.112076079 container start 3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e (image=quay.io/ceph/ceph:v20, name=thirsty_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:02:50 np0005589310 podman[77555]: 2026-01-20 19:02:50.41408522 +0000 UTC m=+0.116038169 container attach 3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e (image=quay.io/ceph/ceph:v20, name=thirsty_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:02:50 np0005589310 podman[77555]: 2026-01-20 19:02:50.320767514 +0000 UTC m=+0.022720483 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:50 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:02:50 np0005589310 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77597 (sysctl)
Jan 20 14:02:50 np0005589310 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 20 14:02:50 np0005589310 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 20 14:02:50 np0005589310 ceph-mon[75120]: Saving service mgr spec with placement count:2
Jan 20 14:02:50 np0005589310 ceph-mon[75120]: Saving service crash spec with placement *
Jan 20 14:02:50 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:50 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:50 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/1711333958' entity='client.admin' 
Jan 20 14:02:50 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:02:50 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 20 14:02:50 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:50 np0005589310 systemd[1]: libpod-3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e.scope: Deactivated successfully.
Jan 20 14:02:50 np0005589310 podman[77641]: 2026-01-20 19:02:50.91267162 +0000 UTC m=+0.020523228 container died 3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e (image=quay.io/ceph/ceph:v20, name=thirsty_bardeen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 20 14:02:51 np0005589310 systemd[1]: var-lib-containers-storage-overlay-1258b1ba0e8e5cab2c1b9edea32894afd2ee7a72b209fa355bd6c57b4bbff93d-merged.mount: Deactivated successfully.
Jan 20 14:02:51 np0005589310 podman[77641]: 2026-01-20 19:02:51.058458438 +0000 UTC m=+0.166310066 container remove 3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e (image=quay.io/ceph/ceph:v20, name=thirsty_bardeen, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:02:51 np0005589310 systemd[1]: libpod-conmon-3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e.scope: Deactivated successfully.
Jan 20 14:02:51 np0005589310 podman[77702]: 2026-01-20 19:02:51.179931609 +0000 UTC m=+0.095421599 container create f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de (image=quay.io/ceph/ceph:v20, name=dreamy_panini, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 14:02:51 np0005589310 podman[77702]: 2026-01-20 19:02:51.108285934 +0000 UTC m=+0.023775934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:51 np0005589310 systemd[1]: Started libpod-conmon-f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de.scope.
Jan 20 14:02:51 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:51 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932332ec3b5f52d81f958e0b7d353a4c407ff9ee92c0a532a3ac48c95614c26d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:51 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932332ec3b5f52d81f958e0b7d353a4c407ff9ee92c0a532a3ac48c95614c26d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:51 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932332ec3b5f52d81f958e0b7d353a4c407ff9ee92c0a532a3ac48c95614c26d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:51 np0005589310 podman[77702]: 2026-01-20 19:02:51.350711379 +0000 UTC m=+0.266201389 container init f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de (image=quay.io/ceph/ceph:v20, name=dreamy_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 14:02:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:02:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:51 np0005589310 podman[77702]: 2026-01-20 19:02:51.361172111 +0000 UTC m=+0.276662101 container start f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de (image=quay.io/ceph/ceph:v20, name=dreamy_panini, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:02:51 np0005589310 podman[77702]: 2026-01-20 19:02:51.367394901 +0000 UTC m=+0.282884891 container attach f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de (image=quay.io/ceph/ceph:v20, name=dreamy_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:02:51 np0005589310 ceph-mgr[75417]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 20 14:02:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:02:51 np0005589310 ceph-mon[75120]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 20 14:02:51 np0005589310 podman[77821]: 2026-01-20 19:02:51.743777825 +0000 UTC m=+0.038372626 container create b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:51 np0005589310 systemd[1]: Started libpod-conmon-b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7.scope.
Jan 20 14:02:51 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:02:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 14:02:51 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:51 np0005589310 podman[77821]: 2026-01-20 19:02:51.81687721 +0000 UTC m=+0.111472031 container init b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:02:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:51 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Added label _admin to host compute-0
Jan 20 14:02:51 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 20 14:02:51 np0005589310 dreamy_panini[77728]: Added label _admin to host compute-0
Jan 20 14:02:51 np0005589310 podman[77821]: 2026-01-20 19:02:51.725939658 +0000 UTC m=+0.020534469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:02:51 np0005589310 podman[77821]: 2026-01-20 19:02:51.823615393 +0000 UTC m=+0.118210194 container start b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:02:51 np0005589310 condescending_elgamal[77838]: 167 167
Jan 20 14:02:51 np0005589310 systemd[1]: libpod-b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7.scope: Deactivated successfully.
Jan 20 14:02:51 np0005589310 podman[77821]: 2026-01-20 19:02:51.827560603 +0000 UTC m=+0.122155434 container attach b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:02:51 np0005589310 podman[77821]: 2026-01-20 19:02:51.827832656 +0000 UTC m=+0.122427457 container died b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elgamal, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:51 np0005589310 systemd[1]: libpod-f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de.scope: Deactivated successfully.
Jan 20 14:02:51 np0005589310 podman[77702]: 2026-01-20 19:02:51.840841512 +0000 UTC m=+0.756331502 container died f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de (image=quay.io/ceph/ceph:v20, name=dreamy_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:02:51 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:51 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:51 np0005589310 ceph-mon[75120]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 20 14:02:51 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:51 np0005589310 systemd[1]: var-lib-containers-storage-overlay-932332ec3b5f52d81f958e0b7d353a4c407ff9ee92c0a532a3ac48c95614c26d-merged.mount: Deactivated successfully.
Jan 20 14:02:52 np0005589310 podman[77702]: 2026-01-20 19:02:52.015793062 +0000 UTC m=+0.931283072 container remove f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de (image=quay.io/ceph/ceph:v20, name=dreamy_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:52 np0005589310 systemd[1]: libpod-conmon-f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de.scope: Deactivated successfully.
Jan 20 14:02:52 np0005589310 systemd[1]: var-lib-containers-storage-overlay-df917659f7ccd4c88fc110c9cb18a3ceb1ba4728151a3ec2e2e6df43983519cc-merged.mount: Deactivated successfully.
Jan 20 14:02:52 np0005589310 podman[77821]: 2026-01-20 19:02:52.128780023 +0000 UTC m=+0.423374834 container remove b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elgamal, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:02:52 np0005589310 systemd[1]: libpod-conmon-b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7.scope: Deactivated successfully.
Jan 20 14:02:52 np0005589310 podman[77870]: 2026-01-20 19:02:52.21481708 +0000 UTC m=+0.177203560 container create 17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2 (image=quay.io/ceph/ceph:v20, name=infallible_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 14:02:52 np0005589310 podman[77870]: 2026-01-20 19:02:52.142879311 +0000 UTC m=+0.105265811 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:52 np0005589310 systemd[1]: Started libpod-conmon-17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2.scope.
Jan 20 14:02:52 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:52 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c6c36ac192419ee4dd8cf6ef22b446af2e2fd785d35b101f0935c0c8a5af20/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:52 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c6c36ac192419ee4dd8cf6ef22b446af2e2fd785d35b101f0935c0c8a5af20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:52 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c6c36ac192419ee4dd8cf6ef22b446af2e2fd785d35b101f0935c0c8a5af20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:52 np0005589310 podman[77870]: 2026-01-20 19:02:52.404799124 +0000 UTC m=+0.367185624 container init 17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2 (image=quay.io/ceph/ceph:v20, name=infallible_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:02:52 np0005589310 podman[77870]: 2026-01-20 19:02:52.412458332 +0000 UTC m=+0.374844812 container start 17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2 (image=quay.io/ceph/ceph:v20, name=infallible_yalow, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:52 np0005589310 podman[77870]: 2026-01-20 19:02:52.416001083 +0000 UTC m=+0.378387593 container attach 17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2 (image=quay.io/ceph/ceph:v20, name=infallible_yalow, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:52 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:02:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 20 14:02:52 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3862612236' entity='client.admin' 
Jan 20 14:02:52 np0005589310 infallible_yalow[77888]: set mgr/dashboard/cluster/status
Jan 20 14:02:52 np0005589310 systemd[1]: libpod-17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2.scope: Deactivated successfully.
Jan 20 14:02:52 np0005589310 podman[77870]: 2026-01-20 19:02:52.968963145 +0000 UTC m=+0.931349625 container died 17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2 (image=quay.io/ceph/ceph:v20, name=infallible_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:02:52 np0005589310 ceph-mon[75120]: Added label _admin to host compute-0
Jan 20 14:02:52 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3862612236' entity='client.admin' 
Jan 20 14:02:52 np0005589310 systemd[1]: var-lib-containers-storage-overlay-54c6c36ac192419ee4dd8cf6ef22b446af2e2fd785d35b101f0935c0c8a5af20-merged.mount: Deactivated successfully.
Jan 20 14:02:53 np0005589310 podman[77870]: 2026-01-20 19:02:53.003622812 +0000 UTC m=+0.966009292 container remove 17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2 (image=quay.io/ceph/ceph:v20, name=infallible_yalow, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 14:02:53 np0005589310 systemd[1]: libpod-conmon-17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2.scope: Deactivated successfully.
Jan 20 14:02:53 np0005589310 systemd[1]: Reloading.
Jan 20 14:02:53 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:02:53 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:02:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:02:53 np0005589310 podman[77974]: 2026-01-20 19:02:53.500693468 +0000 UTC m=+0.043859509 container create 4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 20 14:02:53 np0005589310 systemd[1]: Started libpod-conmon-4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c.scope.
Jan 20 14:02:53 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e29e971235cd4beddde4ac12da800473d2329e3f6f9f0fd51bbf31f7de6c7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e29e971235cd4beddde4ac12da800473d2329e3f6f9f0fd51bbf31f7de6c7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e29e971235cd4beddde4ac12da800473d2329e3f6f9f0fd51bbf31f7de6c7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e29e971235cd4beddde4ac12da800473d2329e3f6f9f0fd51bbf31f7de6c7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:53 np0005589310 podman[77974]: 2026-01-20 19:02:53.482697473 +0000 UTC m=+0.025863534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:02:53 np0005589310 podman[77974]: 2026-01-20 19:02:53.585488725 +0000 UTC m=+0.128654786 container init 4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lederberg, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:53 np0005589310 podman[77974]: 2026-01-20 19:02:53.593976963 +0000 UTC m=+0.137143004 container start 4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lederberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 20 14:02:53 np0005589310 podman[77974]: 2026-01-20 19:02:53.597525523 +0000 UTC m=+0.140691564 container attach 4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:02:53 np0005589310 python3[78020]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:02:53 np0005589310 podman[78026]: 2026-01-20 19:02:53.905459527 +0000 UTC m=+0.043145165 container create b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc (image=quay.io/ceph/ceph:v20, name=mystifying_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 14:02:53 np0005589310 systemd[1]: Started libpod-conmon-b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc.scope.
Jan 20 14:02:53 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e03491707b5b60c1fc6e696018b06b3494ce0cf1f147d137bb42097cfa7468/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e03491707b5b60c1fc6e696018b06b3494ce0cf1f147d137bb42097cfa7468/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:53 np0005589310 podman[78026]: 2026-01-20 19:02:53.882868821 +0000 UTC m=+0.020554259 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:54 np0005589310 podman[78026]: 2026-01-20 19:02:54.05152541 +0000 UTC m=+0.189210848 container init b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc (image=quay.io/ceph/ceph:v20, name=mystifying_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:02:54 np0005589310 podman[78026]: 2026-01-20 19:02:54.057652434 +0000 UTC m=+0.195337852 container start b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc (image=quay.io/ceph/ceph:v20, name=mystifying_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]: [
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:    {
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:        "available": false,
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:        "being_replaced": false,
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:        "ceph_device_lvm": false,
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:        "lsm_data": {},
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:        "lvs": [],
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:        "path": "/dev/sr0",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:        "rejected_reasons": [
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "Has a FileSystem",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "Insufficient space (<5GB)"
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:        ],
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:        "sys_api": {
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "actuators": null,
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "device_nodes": [
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:                "sr0"
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            ],
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "devname": "sr0",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "human_readable_size": "482.00 KB",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "id_bus": "ata",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "model": "QEMU DVD-ROM",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "nr_requests": "2",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "parent": "/dev/sr0",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "partitions": {},
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "path": "/dev/sr0",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "removable": "1",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "rev": "2.5+",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "ro": "0",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "rotational": "1",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "sas_address": "",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "sas_device_handle": "",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "scheduler_mode": "mq-deadline",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "sectors": 0,
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "sectorsize": "2048",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "size": 493568.0,
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "support_discard": "2048",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "type": "disk",
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:            "vendor": "QEMU"
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:        }
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]:    }
Jan 20 14:02:54 np0005589310 amazing_lederberg[77990]: ]
Jan 20 14:02:54 np0005589310 podman[78026]: 2026-01-20 19:02:54.084057844 +0000 UTC m=+0.221743262 container attach b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc (image=quay.io/ceph/ceph:v20, name=mystifying_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 20 14:02:54 np0005589310 systemd[1]: libpod-4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c.scope: Deactivated successfully.
Jan 20 14:02:54 np0005589310 podman[77974]: 2026-01-20 19:02:54.108864907 +0000 UTC m=+0.652030968 container died 4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lederberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 14:02:54 np0005589310 systemd[1]: var-lib-containers-storage-overlay-17e29e971235cd4beddde4ac12da800473d2329e3f6f9f0fd51bbf31f7de6c7d-merged.mount: Deactivated successfully.
Jan 20 14:02:54 np0005589310 podman[77974]: 2026-01-20 19:02:54.15013255 +0000 UTC m=+0.693298591 container remove 4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:02:54 np0005589310 systemd[1]: libpod-conmon-4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c.scope: Deactivated successfully.
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:02:54 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 20 14:02:54 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 20 14:02:54 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 20 14:02:54 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3594887429' entity='client.admin' 
Jan 20 14:02:54 np0005589310 systemd[1]: libpod-b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc.scope: Deactivated successfully.
Jan 20 14:02:54 np0005589310 podman[78026]: 2026-01-20 19:02:54.493280027 +0000 UTC m=+0.630965465 container died b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc (image=quay.io/ceph/ceph:v20, name=mystifying_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:54 np0005589310 systemd[1]: var-lib-containers-storage-overlay-b6e03491707b5b60c1fc6e696018b06b3494ce0cf1f147d137bb42097cfa7468-merged.mount: Deactivated successfully.
Jan 20 14:02:54 np0005589310 podman[78026]: 2026-01-20 19:02:54.535651723 +0000 UTC m=+0.673337141 container remove b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc (image=quay.io/ceph/ceph:v20, name=mystifying_dirac, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 14:02:54 np0005589310 systemd[1]: libpod-conmon-b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc.scope: Deactivated successfully.
Jan 20 14:02:54 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.conf
Jan 20 14:02:54 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.conf
Jan 20 14:02:55 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 14:02:55 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 14:02:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:02:55 np0005589310 ansible-async_wrapper.py[79429]: Invoked with j364810835178 30 /home/zuul/.ansible/tmp/ansible-tmp-1768935774.8982103-36472-265219026780609/AnsiballZ_command.py _
Jan 20 14:02:55 np0005589310 ansible-async_wrapper.py[79499]: Starting module and watcher
Jan 20 14:02:55 np0005589310 ansible-async_wrapper.py[79499]: Start watching 79501 (30)
Jan 20 14:02:55 np0005589310 ansible-async_wrapper.py[79501]: Start module (79501)
Jan 20 14:02:55 np0005589310 ansible-async_wrapper.py[79429]: Return async_wrapper task started.
Jan 20 14:02:55 np0005589310 python3[79505]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:02:55 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.client.admin.keyring
Jan 20 14:02:55 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.client.admin.keyring
Jan 20 14:02:55 np0005589310 podman[79562]: 2026-01-20 19:02:55.693053526 +0000 UTC m=+0.023134474 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:02:56 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:02:57 np0005589310 python3[79892]: ansible-ansible.legacy.async_status Invoked with jid=j364810835178.79429 mode=status _async_dir=/root/.ansible_async
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: Updating compute-0:/etc/ceph/ceph.conf
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3594887429' entity='client.admin' 
Jan 20 14:02:57 np0005589310 podman[79562]: 2026-01-20 19:02:57.337783016 +0000 UTC m=+1.667863964 container create aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708 (image=quay.io/ceph/ceph:v20, name=exciting_northcutt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:02:57 np0005589310 systemd[1]: Started libpod-conmon-aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708.scope.
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:57 np0005589310 ceph-mgr[75417]: [progress INFO root] update: starting ev 96519d7e-b245-4955-a0f0-3df65ad50e93 (Updating crash deployment (+1 -> 1))
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:02:57 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:02:57 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 20 14:02:57 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 20 14:02:57 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:57 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e38ba1a7d52571b0b4e80a6119f539b06490c5a4613867c0bb40d9e0ba3c528/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:57 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e38ba1a7d52571b0b4e80a6119f539b06490c5a4613867c0bb40d9e0ba3c528/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:02:57 np0005589310 podman[79562]: 2026-01-20 19:02:57.4327322 +0000 UTC m=+1.762813148 container init aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708 (image=quay.io/ceph/ceph:v20, name=exciting_northcutt, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 14:02:57 np0005589310 podman[79562]: 2026-01-20 19:02:57.441480021 +0000 UTC m=+1.771560949 container start aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708 (image=quay.io/ceph/ceph:v20, name=exciting_northcutt, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:02:57 np0005589310 podman[79562]: 2026-01-20 19:02:57.445264592 +0000 UTC m=+1.775345540 container attach aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708 (image=quay.io/ceph/ceph:v20, name=exciting_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 14:02:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:02:57 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 14:02:57 np0005589310 exciting_northcutt[79895]: 
Jan 20 14:02:57 np0005589310 exciting_northcutt[79895]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 20 14:02:57 np0005589310 podman[80006]: 2026-01-20 19:02:57.866396688 +0000 UTC m=+0.038823327 container create 4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 20 14:02:57 np0005589310 systemd[1]: libpod-aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708.scope: Deactivated successfully.
Jan 20 14:02:57 np0005589310 podman[79562]: 2026-01-20 19:02:57.880863305 +0000 UTC m=+2.210944233 container died aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708 (image=quay.io/ceph/ceph:v20, name=exciting_northcutt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 14:02:57 np0005589310 systemd[1]: Started libpod-conmon-4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f.scope.
Jan 20 14:02:57 np0005589310 systemd[1]: var-lib-containers-storage-overlay-3e38ba1a7d52571b0b4e80a6119f539b06490c5a4613867c0bb40d9e0ba3c528-merged.mount: Deactivated successfully.
Jan 20 14:02:57 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:02:57 np0005589310 podman[79562]: 2026-01-20 19:02:57.924269941 +0000 UTC m=+2.254350869 container remove aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708 (image=quay.io/ceph/ceph:v20, name=exciting_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:02:57 np0005589310 podman[80006]: 2026-01-20 19:02:57.935504821 +0000 UTC m=+0.107931460 container init 4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_babbage, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:02:57 np0005589310 systemd[1]: libpod-conmon-aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708.scope: Deactivated successfully.
Jan 20 14:02:57 np0005589310 podman[80006]: 2026-01-20 19:02:57.941959361 +0000 UTC m=+0.114386010 container start 4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_babbage, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 20 14:02:57 np0005589310 ansible-async_wrapper.py[79501]: Module complete (79501)
Jan 20 14:02:57 np0005589310 podman[80006]: 2026-01-20 19:02:57.847112881 +0000 UTC m=+0.019539540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:02:57 np0005589310 podman[80006]: 2026-01-20 19:02:57.945943713 +0000 UTC m=+0.118370372 container attach 4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 14:02:57 np0005589310 gracious_babbage[80031]: 167 167
Jan 20 14:02:57 np0005589310 systemd[1]: libpod-4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f.scope: Deactivated successfully.
Jan 20 14:02:57 np0005589310 podman[80006]: 2026-01-20 19:02:57.948076526 +0000 UTC m=+0.120503165 container died 4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_babbage, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:02:57 np0005589310 systemd[1]: var-lib-containers-storage-overlay-4a60d99ec6243eebc1cdd59c06027d1334d0bcad92433364efa7056ed8200bee-merged.mount: Deactivated successfully.
Jan 20 14:02:57 np0005589310 podman[80006]: 2026-01-20 19:02:57.987047438 +0000 UTC m=+0.159474077 container remove 4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:02:57 np0005589310 systemd[1]: libpod-conmon-4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f.scope: Deactivated successfully.
Jan 20 14:02:58 np0005589310 systemd[1]: Reloading.
Jan 20 14:02:58 np0005589310 python3[80099]: ansible-ansible.legacy.async_status Invoked with jid=j364810835178.79429 mode=status _async_dir=/root/.ansible_async
Jan 20 14:02:58 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:02:58 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:02:58 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:02:58 np0005589310 ceph-mon[75120]: Updating compute-0:/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.conf
Jan 20 14:02:58 np0005589310 ceph-mon[75120]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 14:02:58 np0005589310 ceph-mon[75120]: Updating compute-0:/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.client.admin.keyring
Jan 20 14:02:58 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:58 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:58 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:02:58 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 20 14:02:58 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 14:02:58 np0005589310 ceph-mon[75120]: Deploying daemon crash.compute-0 on compute-0
Jan 20 14:02:58 np0005589310 systemd[1]: Reloading.
Jan 20 14:02:58 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:02:58 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:02:58 np0005589310 python3[80185]: ansible-ansible.legacy.async_status Invoked with jid=j364810835178.79429 mode=cleanup _async_dir=/root/.ansible_async
Jan 20 14:02:59 np0005589310 systemd[1]: Starting Ceph crash.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 14:02:59 np0005589310 podman[80304]: 2026-01-20 19:02:59.215337889 +0000 UTC m=+0.023393986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:02:59 np0005589310 python3[80299]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 14:02:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:02:59 np0005589310 python3[80344]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:02:59 np0005589310 podman[80304]: 2026-01-20 19:02:59.913802057 +0000 UTC m=+0.721858154 container create 6869885aa1d598b41af6be53eca6ba60937dcd7fe0247dfddbb485bce69e3fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 14:03:00 np0005589310 podman[80345]: 2026-01-20 19:03:00.018470099 +0000 UTC m=+0.177335776 container create e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00 (image=quay.io/ceph/ceph:v20, name=focused_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:00 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c065cf03633176b665e17c85f8987b10ca8153a11f71c61e33790e4042a0826/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:00 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c065cf03633176b665e17c85f8987b10ca8153a11f71c61e33790e4042a0826/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:00 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c065cf03633176b665e17c85f8987b10ca8153a11f71c61e33790e4042a0826/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:00 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c065cf03633176b665e17c85f8987b10ca8153a11f71c61e33790e4042a0826/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:00 np0005589310 podman[80304]: 2026-01-20 19:03:00.041814681 +0000 UTC m=+0.849870738 container init 6869885aa1d598b41af6be53eca6ba60937dcd7fe0247dfddbb485bce69e3fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 20 14:03:00 np0005589310 podman[80304]: 2026-01-20 19:03:00.050169953 +0000 UTC m=+0.858225990 container start 6869885aa1d598b41af6be53eca6ba60937dcd7fe0247dfddbb485bce69e3fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:03:00 np0005589310 bash[80304]: 6869885aa1d598b41af6be53eca6ba60937dcd7fe0247dfddbb485bce69e3fde
Jan 20 14:03:00 np0005589310 systemd[1]: Started libpod-conmon-e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00.scope.
Jan 20 14:03:00 np0005589310 systemd[1]: Started Ceph crash.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 14:03:00 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:00 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4206b391b783d88f08ea291ba9220c36fdd53838eee853454996a16047cf28e0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:00 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4206b391b783d88f08ea291ba9220c36fdd53838eee853454996a16047cf28e0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:00 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4206b391b783d88f08ea291ba9220c36fdd53838eee853454996a16047cf28e0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:00 np0005589310 podman[80345]: 2026-01-20 19:02:59.997248629 +0000 UTC m=+0.156114336 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:03:00 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 20 14:03:00 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: 2026-01-20T19:03:00.207+0000 7fefedc6a640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 20 14:03:00 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: 2026-01-20T19:03:00.207+0000 7fefedc6a640 -1 AuthRegistry(0x7fefe8052930) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 20 14:03:00 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: 2026-01-20T19:03:00.208+0000 7fefedc6a640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 20 14:03:00 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: 2026-01-20T19:03:00.208+0000 7fefedc6a640 -1 AuthRegistry(0x7fefedc68fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 20 14:03:00 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: 2026-01-20T19:03:00.209+0000 7fefe77fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 20 14:03:00 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: 2026-01-20T19:03:00.209+0000 7fefedc6a640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 20 14:03:00 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 20 14:03:00 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 20 14:03:00 np0005589310 podman[80345]: 2026-01-20 19:03:00.372296529 +0000 UTC m=+0.531162236 container init e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00 (image=quay.io/ceph/ceph:v20, name=focused_davinci, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:03:00 np0005589310 podman[80345]: 2026-01-20 19:03:00.381824367 +0000 UTC m=+0.540690064 container start e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00 (image=quay.io/ceph/ceph:v20, name=focused_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 20 14:03:00 np0005589310 podman[80345]: 2026-01-20 19:03:00.3860438 +0000 UTC m=+0.544909467 container attach e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00 (image=quay.io/ceph/ceph:v20, name=focused_davinci, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:00 np0005589310 ceph-mgr[75417]: [progress INFO root] complete: finished ev 96519d7e-b245-4955-a0f0-3df65ad50e93 (Updating crash deployment (+1 -> 1))
Jan 20 14:03:00 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event 96519d7e-b245-4955-a0f0-3df65ad50e93 (Updating crash deployment (+1 -> 1)) in 3 seconds
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:00 np0005589310 ceph-mgr[75417]: [progress INFO root] update: starting ev c2f9acc6-952a-4760-a159-ad9d63358ff9 (Updating mgr deployment (+1 -> 2))
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.fpkyqm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.fpkyqm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fpkyqm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mgr services"} : dispatch
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:03:00 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.fpkyqm on compute-0
Jan 20 14:03:00 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.fpkyqm on compute-0
Jan 20 14:03:00 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:00 np0005589310 ansible-async_wrapper.py[79499]: Done in kid B.
Jan 20 14:03:00 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14168 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 14:03:00 np0005589310 focused_davinci[80367]: 
Jan 20 14:03:00 np0005589310 focused_davinci[80367]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 20 14:03:00 np0005589310 systemd[1]: libpod-e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00.scope: Deactivated successfully.
Jan 20 14:03:00 np0005589310 podman[80345]: 2026-01-20 19:03:00.912823375 +0000 UTC m=+1.071689042 container died e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00 (image=quay.io/ceph/ceph:v20, name=focused_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.fpkyqm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fpkyqm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 20 14:03:00 np0005589310 ceph-mon[75120]: Deploying daemon mgr.compute-0.fpkyqm on compute-0
Jan 20 14:03:00 np0005589310 systemd[1]: var-lib-containers-storage-overlay-4206b391b783d88f08ea291ba9220c36fdd53838eee853454996a16047cf28e0-merged.mount: Deactivated successfully.
Jan 20 14:03:01 np0005589310 podman[80491]: 2026-01-20 19:03:00.911527342 +0000 UTC m=+0.027401378 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:01 np0005589310 podman[80491]: 2026-01-20 19:03:01.011028366 +0000 UTC m=+0.126902392 container create aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:01 np0005589310 podman[80345]: 2026-01-20 19:03:01.016035627 +0000 UTC m=+1.174901294 container remove e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00 (image=quay.io/ceph/ceph:v20, name=focused_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:01 np0005589310 systemd[1]: libpod-conmon-e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00.scope: Deactivated successfully.
Jan 20 14:03:01 np0005589310 systemd[1]: Started libpod-conmon-aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252.scope.
Jan 20 14:03:01 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:01 np0005589310 podman[80491]: 2026-01-20 19:03:01.149552126 +0000 UTC m=+0.265426162 container init aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:03:01 np0005589310 podman[80491]: 2026-01-20 19:03:01.155985116 +0000 UTC m=+0.271859132 container start aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 14:03:01 np0005589310 quirky_solomon[80524]: 167 167
Jan 20 14:03:01 np0005589310 systemd[1]: libpod-aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252.scope: Deactivated successfully.
Jan 20 14:03:01 np0005589310 podman[80491]: 2026-01-20 19:03:01.159491664 +0000 UTC m=+0.275365680 container attach aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:03:01 np0005589310 podman[80491]: 2026-01-20 19:03:01.160731133 +0000 UTC m=+0.276605159 container died aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:01 np0005589310 systemd[1]: var-lib-containers-storage-overlay-ace5946be003c75ae06bfb55f8b4f6a0435ef3c1e73c1d53303888219c87ed34-merged.mount: Deactivated successfully.
Jan 20 14:03:01 np0005589310 podman[80491]: 2026-01-20 19:03:01.197170565 +0000 UTC m=+0.313044581 container remove aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 14:03:01 np0005589310 systemd[1]: libpod-conmon-aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252.scope: Deactivated successfully.
Jan 20 14:03:01 np0005589310 systemd[1]: Reloading.
Jan 20 14:03:01 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:03:01 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:03:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:01 np0005589310 systemd[1]: Reloading.
Jan 20 14:03:01 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:03:01 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:03:01 np0005589310 python3[80603]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:01 np0005589310 podman[80641]: 2026-01-20 19:03:01.734834283 +0000 UTC m=+0.102964332 container create 7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa (image=quay.io/ceph/ceph:v20, name=mystifying_turing, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:01 np0005589310 podman[80641]: 2026-01-20 19:03:01.656768789 +0000 UTC m=+0.024898848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:01 np0005589310 systemd[1]: Started libpod-conmon-7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa.scope.
Jan 20 14:03:01 np0005589310 systemd[1]: Starting Ceph mgr.compute-0.fpkyqm for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 14:03:01 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:01 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d23de597c346b43cd96cde34a57db9623e5304f72845d8819e3394d0ff85414/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:01 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d23de597c346b43cd96cde34a57db9623e5304f72845d8819e3394d0ff85414/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:01 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d23de597c346b43cd96cde34a57db9623e5304f72845d8819e3394d0ff85414/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:01 np0005589310 podman[80641]: 2026-01-20 19:03:01.979135907 +0000 UTC m=+0.347265966 container init 7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa (image=quay.io/ceph/ceph:v20, name=mystifying_turing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:03:01 np0005589310 podman[80641]: 2026-01-20 19:03:01.986281001 +0000 UTC m=+0.354411040 container start 7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa (image=quay.io/ceph/ceph:v20, name=mystifying_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:03:02 np0005589310 podman[80641]: 2026-01-20 19:03:02.000466003 +0000 UTC m=+0.368596112 container attach 7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa (image=quay.io/ceph/ceph:v20, name=mystifying_turing, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 14:03:02 np0005589310 podman[80713]: 2026-01-20 19:03:02.159379832 +0000 UTC m=+0.051074335 container create 189ff4639020685c49a2a772efc4ae6a313b837fc248990d3a29623287f2b42c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:03:02 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb3b5d27dc6d43ea43c47e99c6297833ccb281e937aa683bf57928bf98e13cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:02 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb3b5d27dc6d43ea43c47e99c6297833ccb281e937aa683bf57928bf98e13cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:02 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb3b5d27dc6d43ea43c47e99c6297833ccb281e937aa683bf57928bf98e13cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:02 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb3b5d27dc6d43ea43c47e99c6297833ccb281e937aa683bf57928bf98e13cd/merged/var/lib/ceph/mgr/ceph-compute-0.fpkyqm supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:02 np0005589310 podman[80713]: 2026-01-20 19:03:02.228657323 +0000 UTC m=+0.120351856 container init 189ff4639020685c49a2a772efc4ae6a313b837fc248990d3a29623287f2b42c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 20 14:03:02 np0005589310 podman[80713]: 2026-01-20 19:03:02.133445606 +0000 UTC m=+0.025140119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:02 np0005589310 podman[80713]: 2026-01-20 19:03:02.234662182 +0000 UTC m=+0.126356685 container start 189ff4639020685c49a2a772efc4ae6a313b837fc248990d3a29623287f2b42c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:02 np0005589310 bash[80713]: 189ff4639020685c49a2a772efc4ae6a313b837fc248990d3a29623287f2b42c
Jan 20 14:03:02 np0005589310 systemd[1]: Started Ceph mgr.compute-0.fpkyqm for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 14:03:02 np0005589310 ceph-mgr[80749]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 14:03:02 np0005589310 ceph-mgr[80749]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 20 14:03:02 np0005589310 ceph-mgr[80749]: pidfile_write: ignore empty --pid-file
Jan 20 14:03:02 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:02 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'alerts'
Jan 20 14:03:02 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:03:02 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 20 14:03:02 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:02 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'balancer'
Jan 20 14:03:02 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:02 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:03:02 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/688908581' entity='client.admin' 
Jan 20 14:03:02 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:02 np0005589310 systemd[1]: libpod-7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa.scope: Deactivated successfully.
Jan 20 14:03:02 np0005589310 podman[80641]: 2026-01-20 19:03:02.51627529 +0000 UTC m=+0.884405339 container died 7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa (image=quay.io/ceph/ceph:v20, name=mystifying_turing, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:02 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 14:03:02 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'cephadm'
Jan 20 14:03:02 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:02 np0005589310 ceph-mgr[75417]: [progress INFO root] complete: finished ev c2f9acc6-952a-4760-a159-ad9d63358ff9 (Updating mgr deployment (+1 -> 2))
Jan 20 14:03:02 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event c2f9acc6-952a-4760-a159-ad9d63358ff9 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Jan 20 14:03:02 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 14:03:02 np0005589310 systemd[1]: var-lib-containers-storage-overlay-3d23de597c346b43cd96cde34a57db9623e5304f72845d8819e3394d0ff85414-merged.mount: Deactivated successfully.
Jan 20 14:03:02 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:02 np0005589310 podman[80641]: 2026-01-20 19:03:02.670516965 +0000 UTC m=+1.038647004 container remove 7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa (image=quay.io/ceph/ceph:v20, name=mystifying_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:03:02 np0005589310 systemd[1]: libpod-conmon-7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa.scope: Deactivated successfully.
Jan 20 14:03:02 np0005589310 python3[80884]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:03 np0005589310 podman[80896]: 2026-01-20 19:03:03.040897472 +0000 UTC m=+0.022232420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:03 np0005589310 podman[80896]: 2026-01-20 19:03:03.174377408 +0000 UTC m=+0.155712336 container create 95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317 (image=quay.io/ceph/ceph:v20, name=pensive_napier, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 14:03:03 np0005589310 systemd[1]: Started libpod-conmon-95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317.scope.
Jan 20 14:03:03 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd57857604ecde58115538786074b50f129ee9ef3e7cd66c58de559bd0aadb53/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd57857604ecde58115538786074b50f129ee9ef3e7cd66c58de559bd0aadb53/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd57857604ecde58115538786074b50f129ee9ef3e7cd66c58de559bd0aadb53/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:03 np0005589310 podman[80896]: 2026-01-20 19:03:03.280546073 +0000 UTC m=+0.261881021 container init 95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317 (image=quay.io/ceph/ceph:v20, name=pensive_napier, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:03:03 np0005589310 podman[80896]: 2026-01-20 19:03:03.287723188 +0000 UTC m=+0.269058116 container start 95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317 (image=quay.io/ceph/ceph:v20, name=pensive_napier, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:03 np0005589310 podman[80896]: 2026-01-20 19:03:03.293488975 +0000 UTC m=+0.274823903 container attach 95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317 (image=quay.io/ceph/ceph:v20, name=pensive_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:03 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'crash'
Jan 20 14:03:03 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:03 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/688908581' entity='client.admin' 
Jan 20 14:03:03 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:03 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:03 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:03 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'dashboard'
Jan 20 14:03:03 np0005589310 podman[80957]: 2026-01-20 19:03:03.496711484 +0000 UTC m=+0.194389795 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 14:03:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:03 np0005589310 podman[80957]: 2026-01-20 19:03:03.665753101 +0000 UTC m=+0.363431412 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3844776602' entity='client.admin' 
Jan 20 14:03:04 np0005589310 systemd[1]: libpod-95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317.scope: Deactivated successfully.
Jan 20 14:03:04 np0005589310 podman[80896]: 2026-01-20 19:03:04.140067094 +0000 UTC m=+1.121402052 container died 95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317 (image=quay.io/ceph/ceph:v20, name=pensive_napier, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:03:04 np0005589310 systemd[1]: var-lib-containers-storage-overlay-bd57857604ecde58115538786074b50f129ee9ef3e7cd66c58de559bd0aadb53-merged.mount: Deactivated successfully.
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:04 np0005589310 podman[80896]: 2026-01-20 19:03:04.214927572 +0000 UTC m=+1.196262500 container remove 95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317 (image=quay.io/ceph/ceph:v20, name=pensive_napier, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:04 np0005589310 systemd[1]: libpod-conmon-95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317.scope: Deactivated successfully.
Jan 20 14:03:04 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 20 14:03:04 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:03:04 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 14:03:04 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 14:03:04 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'devicehealth'
Jan 20 14:03:04 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'diskprediction_local'
Jan 20 14:03:04 np0005589310 ceph-mgr[75417]: [progress INFO root] Writing back 2 completed events
Jan 20 14:03:04 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:03:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:03:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:03:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:03:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:03:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3844776602' entity='client.admin' 
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 20 14:03:04 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:04 np0005589310 python3[81206]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:04 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm[80745]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 20 14:03:04 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm[80745]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 20 14:03:04 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm[80745]:  from numpy import show_config as show_numpy_config
Jan 20 14:03:04 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'influx'
Jan 20 14:03:04 np0005589310 podman[81207]: 2026-01-20 19:03:04.620825296 +0000 UTC m=+0.039556612 container create 3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2 (image=quay.io/ceph/ceph:v20, name=friendly_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 20 14:03:04 np0005589310 systemd[1]: Started libpod-conmon-3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2.scope.
Jan 20 14:03:04 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'insights'
Jan 20 14:03:04 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ef0647be8a103f238932a9ef55f90f5c909144306c27a82dd8ad9ac147756b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ef0647be8a103f238932a9ef55f90f5c909144306c27a82dd8ad9ac147756b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ef0647be8a103f238932a9ef55f90f5c909144306c27a82dd8ad9ac147756b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:04 np0005589310 podman[81207]: 2026-01-20 19:03:04.603946295 +0000 UTC m=+0.022677631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:04 np0005589310 podman[81207]: 2026-01-20 19:03:04.75092273 +0000 UTC m=+0.169654066 container init 3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2 (image=quay.io/ceph/ceph:v20, name=friendly_murdock, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:04 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'iostat'
Jan 20 14:03:04 np0005589310 podman[81207]: 2026-01-20 19:03:04.782877077 +0000 UTC m=+0.201608393 container start 3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2 (image=quay.io/ceph/ceph:v20, name=friendly_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 20 14:03:04 np0005589310 podman[81207]: 2026-01-20 19:03:04.786260649 +0000 UTC m=+0.204991965 container attach 3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2 (image=quay.io/ceph/ceph:v20, name=friendly_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:03:04 np0005589310 podman[81239]: 2026-01-20 19:03:04.816774166 +0000 UTC m=+0.088300626 container create 78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b (image=quay.io/ceph/ceph:v20, name=elastic_tharp, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:04 np0005589310 systemd[1]: Started libpod-conmon-78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b.scope.
Jan 20 14:03:04 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:04 np0005589310 podman[81239]: 2026-01-20 19:03:04.792013415 +0000 UTC m=+0.063539905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:04 np0005589310 podman[81239]: 2026-01-20 19:03:04.894511974 +0000 UTC m=+0.166038464 container init 78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b (image=quay.io/ceph/ceph:v20, name=elastic_tharp, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:04 np0005589310 podman[81239]: 2026-01-20 19:03:04.900737593 +0000 UTC m=+0.172264053 container start 78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b (image=quay.io/ceph/ceph:v20, name=elastic_tharp, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:04 np0005589310 elastic_tharp[81257]: 167 167
Jan 20 14:03:04 np0005589310 systemd[1]: libpod-78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b.scope: Deactivated successfully.
Jan 20 14:03:04 np0005589310 conmon[81257]: conmon 78414330d95eb7e3faa5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b.scope/container/memory.events
Jan 20 14:03:04 np0005589310 podman[81239]: 2026-01-20 19:03:04.9052609 +0000 UTC m=+0.176787390 container attach 78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b (image=quay.io/ceph/ceph:v20, name=elastic_tharp, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 14:03:04 np0005589310 podman[81239]: 2026-01-20 19:03:04.906224546 +0000 UTC m=+0.177751026 container died 78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b (image=quay.io/ceph/ceph:v20, name=elastic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:03:04 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'k8sevents'
Jan 20 14:03:04 np0005589310 systemd[1]: var-lib-containers-storage-overlay-962001b968761b6260d041ea7165dc1f862254fdebcb384ce388978f7edfede1-merged.mount: Deactivated successfully.
Jan 20 14:03:04 np0005589310 podman[81239]: 2026-01-20 19:03:04.947578585 +0000 UTC m=+0.219105045 container remove 78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b (image=quay.io/ceph/ceph:v20, name=elastic_tharp, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 20 14:03:04 np0005589310 systemd[1]: libpod-conmon-78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b.scope: Deactivated successfully.
Jan 20 14:03:05 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:03:05 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:05 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:03:05 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:05 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.meyjbf (unknown last config time)...
Jan 20 14:03:05 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.meyjbf (unknown last config time)...
Jan 20 14:03:05 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.meyjbf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 20 14:03:05 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.meyjbf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 20 14:03:05 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 20 14:03:05 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mgr services"} : dispatch
Jan 20 14:03:05 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:03:05 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:03:05 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.meyjbf on compute-0
Jan 20 14:03:05 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.meyjbf on compute-0
Jan 20 14:03:05 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 20 14:03:05 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2697145801' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 20 14:03:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:05 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'localpool'
Jan 20 14:03:05 np0005589310 podman[81363]: 2026-01-20 19:03:05.534226518 +0000 UTC m=+0.032007510 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:05 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'mds_autoscaler'
Jan 20 14:03:06 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'mirroring'
Jan 20 14:03:06 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'nfs'
Jan 20 14:03:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 20 14:03:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 14:03:06 np0005589310 ceph-mon[75120]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 20 14:03:06 np0005589310 ceph-mon[75120]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 14:03:06 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:06 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:06 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.meyjbf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 20 14:03:06 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/2697145801' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 20 14:03:06 np0005589310 podman[81363]: 2026-01-20 19:03:06.17347561 +0000 UTC m=+0.671256582 container create 90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77 (image=quay.io/ceph/ceph:v20, name=great_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:03:06 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2697145801' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 20 14:03:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 20 14:03:06 np0005589310 friendly_murdock[81234]: set require_min_compat_client to mimic
Jan 20 14:03:06 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 20 14:03:06 np0005589310 systemd[1]: Started libpod-conmon-90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77.scope.
Jan 20 14:03:06 np0005589310 systemd[1]: libpod-3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2.scope: Deactivated successfully.
Jan 20 14:03:06 np0005589310 conmon[81234]: conmon 3302e0e7ab64dc38c761 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2.scope/container/memory.events
Jan 20 14:03:06 np0005589310 podman[81207]: 2026-01-20 19:03:06.22048177 +0000 UTC m=+1.639213106 container died 3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2 (image=quay.io/ceph/ceph:v20, name=friendly_murdock, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 14:03:06 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:06 np0005589310 systemd[1]: var-lib-containers-storage-overlay-11ef0647be8a103f238932a9ef55f90f5c909144306c27a82dd8ad9ac147756b-merged.mount: Deactivated successfully.
Jan 20 14:03:06 np0005589310 podman[81363]: 2026-01-20 19:03:06.273015685 +0000 UTC m=+0.770796667 container init 90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77 (image=quay.io/ceph/ceph:v20, name=great_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 20 14:03:06 np0005589310 podman[81207]: 2026-01-20 19:03:06.279691026 +0000 UTC m=+1.698422342 container remove 3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2 (image=quay.io/ceph/ceph:v20, name=friendly_murdock, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:03:06 np0005589310 podman[81363]: 2026-01-20 19:03:06.283674438 +0000 UTC m=+0.781455410 container start 90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77 (image=quay.io/ceph/ceph:v20, name=great_mestorf, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Jan 20 14:03:06 np0005589310 podman[81363]: 2026-01-20 19:03:06.28684389 +0000 UTC m=+0.784624882 container attach 90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77 (image=quay.io/ceph/ceph:v20, name=great_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 14:03:06 np0005589310 great_mestorf[81381]: 167 167
Jan 20 14:03:06 np0005589310 systemd[1]: libpod-90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77.scope: Deactivated successfully.
Jan 20 14:03:06 np0005589310 podman[81363]: 2026-01-20 19:03:06.290489586 +0000 UTC m=+0.788270558 container died 90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77 (image=quay.io/ceph/ceph:v20, name=great_mestorf, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 20 14:03:06 np0005589310 systemd[1]: libpod-conmon-3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2.scope: Deactivated successfully.
Jan 20 14:03:06 np0005589310 systemd[1]: var-lib-containers-storage-overlay-e7afb7e98c7f8ed49957957176e65b5345c4400f368b229e1f24599b93ef6c5c-merged.mount: Deactivated successfully.
Jan 20 14:03:06 np0005589310 podman[81363]: 2026-01-20 19:03:06.334448828 +0000 UTC m=+0.832229820 container remove 90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77 (image=quay.io/ceph/ceph:v20, name=great_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:06 np0005589310 systemd[1]: libpod-conmon-90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77.scope: Deactivated successfully.
Jan 20 14:03:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:03:06 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:03:06 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:06 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:06 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'orchestrator'
Jan 20 14:03:06 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'osd_perf_query'
Jan 20 14:03:06 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'osd_support'
Jan 20 14:03:06 np0005589310 python3[81502]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:06 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'pg_autoscaler'
Jan 20 14:03:06 np0005589310 podman[81532]: 2026-01-20 19:03:06.965630452 +0000 UTC m=+0.058839119 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:07 np0005589310 podman[81534]: 2026-01-20 19:03:06.950803809 +0000 UTC m=+0.030952949 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:07 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'progress'
Jan 20 14:03:07 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'prometheus'
Jan 20 14:03:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:07 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'rbd_support'
Jan 20 14:03:07 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'rgw'
Jan 20 14:03:08 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'rook'
Jan 20 14:03:08 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:08 np0005589310 podman[81534]: 2026-01-20 19:03:08.450734378 +0000 UTC m=+1.530883518 container create 0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474 (image=quay.io/ceph/ceph:v20, name=distracted_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:03:08 np0005589310 ceph-mon[75120]: Reconfiguring mgr.compute-0.meyjbf (unknown last config time)...
Jan 20 14:03:08 np0005589310 ceph-mon[75120]: Reconfiguring daemon mgr.compute-0.meyjbf on compute-0
Jan 20 14:03:08 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/2697145801' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 20 14:03:08 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:08 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:08 np0005589310 systemd[1]: Started libpod-conmon-0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474.scope.
Jan 20 14:03:08 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:08 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8921e9a173d35b37480c68980dae12f1e0b894dc503b377a46f7c77b73f3cb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:08 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8921e9a173d35b37480c68980dae12f1e0b894dc503b377a46f7c77b73f3cb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:08 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8921e9a173d35b37480c68980dae12f1e0b894dc503b377a46f7c77b73f3cb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:08 np0005589310 podman[81532]: 2026-01-20 19:03:08.526308081 +0000 UTC m=+1.619516718 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 20 14:03:08 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'selftest'
Jan 20 14:03:08 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'smb'
Jan 20 14:03:09 np0005589310 podman[81534]: 2026-01-20 19:03:09.032653024 +0000 UTC m=+2.112802204 container init 0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474 (image=quay.io/ceph/ceph:v20, name=distracted_lehmann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:09 np0005589310 podman[81534]: 2026-01-20 19:03:09.039670962 +0000 UTC m=+2.119820102 container start 0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474 (image=quay.io/ceph/ceph:v20, name=distracted_lehmann, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:09 np0005589310 podman[81534]: 2026-01-20 19:03:09.043414742 +0000 UTC m=+2.123563942 container attach 0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474 (image=quay.io/ceph/ceph:v20, name=distracted_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:03:09 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'snap_schedule'
Jan 20 14:03:09 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'stats'
Jan 20 14:03:09 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'status'
Jan 20 14:03:09 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'telegraf'
Jan 20 14:03:09 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:03:09 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:09 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:03:09 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:09 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:03:09 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:03:09 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:03:09 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:03:09 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:03:09 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:09 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'telemetry'
Jan 20 14:03:09 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:03:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:09 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'test_orchestrator'
Jan 20 14:03:09 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:09 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:09 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:03:09 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:09 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 14:03:09 np0005589310 ceph-mgr[80749]: mgr[py] Loading python module 'volumes'
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Added host compute-0
Jan 20 14:03:10 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 20 14:03:10 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 20 14:03:10 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:03:10 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 20 14:03:10 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 20 14:03:10 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 20 14:03:10 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 20 14:03:10 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mgr[75417]: [progress INFO root] update: starting ev 755fa7d6-748c-4d8d-8256-596ee6d1df92 (Updating mgr deployment (-1 -> 1))
Jan 20 14:03:10 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.fpkyqm from compute-0 -- ports [8765]
Jan 20 14:03:10 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.fpkyqm from compute-0 -- ports [8765]
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 distracted_lehmann[81579]: Added host 'compute-0' with addr '192.168.122.100'
Jan 20 14:03:10 np0005589310 distracted_lehmann[81579]: Scheduled mon update...
Jan 20 14:03:10 np0005589310 distracted_lehmann[81579]: Scheduled mgr update...
Jan 20 14:03:10 np0005589310 distracted_lehmann[81579]: Scheduled osd.default_drive_group update...
Jan 20 14:03:10 np0005589310 systemd[1]: libpod-0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474.scope: Deactivated successfully.
Jan 20 14:03:10 np0005589310 podman[81534]: 2026-01-20 19:03:10.150229195 +0000 UTC m=+3.230378335 container died 0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474 (image=quay.io/ceph/ceph:v20, name=distracted_lehmann, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 14:03:10 np0005589310 ceph-mgr[80749]: ms_deliver_dispatch: unhandled message 0x55653fa32000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : Standby manager daemon compute-0.fpkyqm started
Jan 20 14:03:10 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from mgr.compute-0.fpkyqm 192.168.122.100:0/611470075; not ready for session (expect reconnect)
Jan 20 14:03:10 np0005589310 systemd[1]: var-lib-containers-storage-overlay-df8921e9a173d35b37480c68980dae12f1e0b894dc503b377a46f7c77b73f3cb-merged.mount: Deactivated successfully.
Jan 20 14:03:10 np0005589310 podman[81534]: 2026-01-20 19:03:10.304566592 +0000 UTC m=+3.384715732 container remove 0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474 (image=quay.io/ceph/ceph:v20, name=distracted_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:10 np0005589310 systemd[1]: libpod-conmon-0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474.scope: Deactivated successfully.
Jan 20 14:03:10 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:10 np0005589310 systemd[1]: Stopping Ceph mgr.compute-0.fpkyqm for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: Added host compute-0
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: Saving service mon spec with placement compute-0
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: Saving service mgr spec with placement compute-0
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: Saving service osd.default_drive_group spec with placement compute-0
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: Removing daemon mgr.compute-0.fpkyqm from compute-0 -- ports [8765]
Jan 20 14:03:10 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:10 np0005589310 python3[81904]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:10 np0005589310 podman[81912]: 2026-01-20 19:03:10.796254365 +0000 UTC m=+0.114201992 container died 189ff4639020685c49a2a772efc4ae6a313b837fc248990d3a29623287f2b42c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:10 np0005589310 podman[81928]: 2026-01-20 19:03:10.846300467 +0000 UTC m=+0.059423066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:10 np0005589310 systemd[1]: var-lib-containers-storage-overlay-7eb3b5d27dc6d43ea43c47e99c6297833ccb281e937aa683bf57928bf98e13cd-merged.mount: Deactivated successfully.
Jan 20 14:03:10 np0005589310 podman[81928]: 2026-01-20 19:03:10.982338438 +0000 UTC m=+0.195461017 container create 779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac (image=quay.io/ceph/ceph:v20, name=frosty_kilby, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:11 np0005589310 podman[81912]: 2026-01-20 19:03:11.082614616 +0000 UTC m=+0.400562243 container remove 189ff4639020685c49a2a772efc4ae6a313b837fc248990d3a29623287f2b42c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:03:11 np0005589310 bash[81912]: ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm
Jan 20 14:03:11 np0005589310 systemd[1]: Started libpod-conmon-779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac.scope.
Jan 20 14:03:11 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:11 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdf1080445df14b3a93944e290892acac822e55903c63cb37d8cb496943ec2c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:11 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdf1080445df14b3a93944e290892acac822e55903c63cb37d8cb496943ec2c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:11 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdf1080445df14b3a93944e290892acac822e55903c63cb37d8cb496943ec2c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:11 np0005589310 podman[81928]: 2026-01-20 19:03:11.178645394 +0000 UTC m=+0.391768003 container init 779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac (image=quay.io/ceph/ceph:v20, name=frosty_kilby, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:03:11 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.meyjbf(active, since 40s), standbys: compute-0.fpkyqm
Jan 20 14:03:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fpkyqm", "id": "compute-0.fpkyqm"} v 0)
Jan 20 14:03:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mgr metadata", "who": "compute-0.fpkyqm", "id": "compute-0.fpkyqm"} : dispatch
Jan 20 14:03:11 np0005589310 podman[81928]: 2026-01-20 19:03:11.189177715 +0000 UTC m=+0.402300284 container start 779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac (image=quay.io/ceph/ceph:v20, name=frosty_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 14:03:11 np0005589310 systemd[1]: ceph-90fff835-31df-513f-a409-b6642f04e6ac@mgr.compute-0.fpkyqm.service: Main process exited, code=exited, status=143/n/a
Jan 20 14:03:11 np0005589310 podman[81928]: 2026-01-20 19:03:11.193559449 +0000 UTC m=+0.406682028 container attach 779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac (image=quay.io/ceph/ceph:v20, name=frosty_kilby, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 14:03:11 np0005589310 systemd[1]: ceph-90fff835-31df-513f-a409-b6642f04e6ac@mgr.compute-0.fpkyqm.service: Failed with result 'exit-code'.
Jan 20 14:03:11 np0005589310 systemd[1]: Stopped Ceph mgr.compute-0.fpkyqm for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 14:03:11 np0005589310 systemd[1]: ceph-90fff835-31df-513f-a409-b6642f04e6ac@mgr.compute-0.fpkyqm.service: Consumed 8.887s CPU time, 464.4M memory peak, read 0B from disk, written 832.5K to disk.
Jan 20 14:03:11 np0005589310 systemd[1]: Reloading.
Jan 20 14:03:11 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:03:11 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:03:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:11 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.fpkyqm
Jan 20 14:03:11 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.fpkyqm
Jan 20 14:03:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.fpkyqm"} v 0)
Jan 20 14:03:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.fpkyqm"} : dispatch
Jan 20 14:03:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 20 14:03:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668631957' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 20 14:03:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.fpkyqm"}]': finished
Jan 20 14:03:11 np0005589310 frosty_kilby[81958]: 
Jan 20 14:03:11 np0005589310 frosty_kilby[81958]: {"fsid":"90fff835-31df-513f-a409-b6642f04e6ac","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":64,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-20T19:02:04:930609+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":1,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-20T19:02:04.932596+0000","services":{}},"progress_events":{}}
Jan 20 14:03:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 14:03:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:11 np0005589310 ceph-mgr[75417]: [progress INFO root] complete: finished ev 755fa7d6-748c-4d8d-8256-596ee6d1df92 (Updating mgr deployment (-1 -> 1))
Jan 20 14:03:11 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event 755fa7d6-748c-4d8d-8256-596ee6d1df92 (Updating mgr deployment (-1 -> 1)) in 2 seconds
Jan 20 14:03:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 14:03:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:11 np0005589310 systemd[1]: libpod-779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac.scope: Deactivated successfully.
Jan 20 14:03:11 np0005589310 conmon[81958]: conmon 779e7814d2a5f361bc26 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac.scope/container/memory.events
Jan 20 14:03:11 np0005589310 podman[81928]: 2026-01-20 19:03:11.803159861 +0000 UTC m=+1.016282440 container died 779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac (image=quay.io/ceph/ceph:v20, name=frosty_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 14:03:11 np0005589310 systemd[1]: var-lib-containers-storage-overlay-ecdf1080445df14b3a93944e290892acac822e55903c63cb37d8cb496943ec2c-merged.mount: Deactivated successfully.
Jan 20 14:03:11 np0005589310 podman[81928]: 2026-01-20 19:03:11.895911 +0000 UTC m=+1.109033579 container remove 779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac (image=quay.io/ceph/ceph:v20, name=frosty_kilby, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:11 np0005589310 systemd[1]: libpod-conmon-779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac.scope: Deactivated successfully.
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: Removing key for mgr.compute-0.fpkyqm
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.fpkyqm"} : dispatch
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.fpkyqm"}]': finished
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:12 np0005589310 podman[82183]: 2026-01-20 19:03:12.423419226 +0000 UTC m=+0.074889015 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:12 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:12 np0005589310 podman[82183]: 2026-01-20 19:03:12.546692993 +0000 UTC m=+0.198162742 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:03:12 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:03:13 np0005589310 podman[82341]: 2026-01-20 19:03:13.302016375 +0000 UTC m=+0.021811760 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:13 np0005589310 podman[82341]: 2026-01-20 19:03:13.403542614 +0000 UTC m=+0.123337989 container create 2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_darwin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:13 np0005589310 systemd[1]: Started libpod-conmon-2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c.scope.
Jan 20 14:03:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:13 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:13 np0005589310 podman[82341]: 2026-01-20 19:03:13.620350379 +0000 UTC m=+0.340145824 container init 2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_darwin, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:03:13 np0005589310 podman[82341]: 2026-01-20 19:03:13.628647747 +0000 UTC m=+0.348443102 container start 2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:03:13 np0005589310 systemd[1]: libpod-2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c.scope: Deactivated successfully.
Jan 20 14:03:13 np0005589310 podman[82341]: 2026-01-20 19:03:13.633103343 +0000 UTC m=+0.352898728 container attach 2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_darwin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:13 np0005589310 suspicious_darwin[82357]: 167 167
Jan 20 14:03:13 np0005589310 conmon[82357]: conmon 2ec9da98f0bad3fe8418 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c.scope/container/memory.events
Jan 20 14:03:13 np0005589310 podman[82341]: 2026-01-20 19:03:13.634540877 +0000 UTC m=+0.354336242 container died 2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 14:03:13 np0005589310 systemd[1]: var-lib-containers-storage-overlay-41d91164838d8e2e269d7742be9db0a41b54cb5cf7e44756079f6eef0eca6a19-merged.mount: Deactivated successfully.
Jan 20 14:03:13 np0005589310 podman[82341]: 2026-01-20 19:03:13.685787607 +0000 UTC m=+0.405582952 container remove 2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:03:13 np0005589310 systemd[1]: libpod-conmon-2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c.scope: Deactivated successfully.
Jan 20 14:03:13 np0005589310 podman[82380]: 2026-01-20 19:03:13.832667316 +0000 UTC m=+0.024382921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:14 np0005589310 podman[82380]: 2026-01-20 19:03:14.065524204 +0000 UTC m=+0.257239799 container create 0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 14:03:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:03:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:03:14 np0005589310 systemd[1]: Started libpod-conmon-0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960.scope.
Jan 20 14:03:14 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:14 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/847f4d6702ca8481e19ff2dc1fdf4673dc737d0633bb720e7520c2baf4827ca8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:14 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/847f4d6702ca8481e19ff2dc1fdf4673dc737d0633bb720e7520c2baf4827ca8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:14 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/847f4d6702ca8481e19ff2dc1fdf4673dc737d0633bb720e7520c2baf4827ca8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:14 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/847f4d6702ca8481e19ff2dc1fdf4673dc737d0633bb720e7520c2baf4827ca8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:14 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/847f4d6702ca8481e19ff2dc1fdf4673dc737d0633bb720e7520c2baf4827ca8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:14 np0005589310 podman[82380]: 2026-01-20 19:03:14.297261184 +0000 UTC m=+0.488976839 container init 0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 14:03:14 np0005589310 podman[82380]: 2026-01-20 19:03:14.31051688 +0000 UTC m=+0.502232445 container start 0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:03:14 np0005589310 podman[82380]: 2026-01-20 19:03:14.315520829 +0000 UTC m=+0.507236384 container attach 0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:14 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:14 np0005589310 ceph-mgr[75417]: [progress INFO root] Writing back 3 completed events
Jan 20 14:03:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 14:03:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:15 np0005589310 condescending_blackburn[82396]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:03:15 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:15 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:15 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ea83dc26-7f71-429f-b9c1-f87c51d6aebb
Jan 20 14:03:15 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb"} v 0)
Jan 20 14:03:15 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2624241486' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb"} : dispatch
Jan 20 14:03:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 20 14:03:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 14:03:15 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2624241486' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb"}]': finished
Jan 20 14:03:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 20 14:03:15 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 20 14:03:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 14:03:15 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 14:03:15 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 14:03:15 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 20 14:03:15 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 20 14:03:15 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 14:03:15 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:15 np0005589310 lvm[82488]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:03:15 np0005589310 lvm[82488]: VG ceph_vg0 finished
Jan 20 14:03:15 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 20 14:03:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 20 14:03:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3025274123' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 20 14:03:16 np0005589310 condescending_blackburn[82396]: stderr: got monmap epoch 1
Jan 20 14:03:16 np0005589310 condescending_blackburn[82396]: --> Creating keyring file for osd.0
Jan 20 14:03:16 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 20 14:03:16 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 20 14:03:16 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid ea83dc26-7f71-429f-b9c1-f87c51d6aebb --setuser ceph --setgroup ceph
Jan 20 14:03:16 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:16 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/2624241486' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb"} : dispatch
Jan 20 14:03:16 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/2624241486' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb"}]': finished
Jan 20 14:03:17 np0005589310 condescending_blackburn[82396]: stderr: 2026-01-20T19:03:16.357+0000 7fdacccdf8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 20 14:03:17 np0005589310 condescending_blackburn[82396]: stderr: 2026-01-20T19:03:16.381+0000 7fdacccdf8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 20 14:03:17 np0005589310 condescending_blackburn[82396]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 20 14:03:17 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 14:03:17 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 20 14:03:17 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:17 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:17 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 14:03:17 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 14:03:17 np0005589310 condescending_blackburn[82396]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 20 14:03:17 np0005589310 condescending_blackburn[82396]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 20 14:03:17 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:17 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:17 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new aba2c458-fbc4-4039-bc23-d828faa8f69c
Jan 20 14:03:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:17 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 20 14:03:17 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 20 14:03:17 np0005589310 ceph-mon[75120]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 20 14:03:17 np0005589310 ceph-mon[75120]: Cluster is now healthy
Jan 20 14:03:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "aba2c458-fbc4-4039-bc23-d828faa8f69c"} v 0)
Jan 20 14:03:17 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1217177961' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "aba2c458-fbc4-4039-bc23-d828faa8f69c"} : dispatch
Jan 20 14:03:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 20 14:03:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 14:03:17 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1217177961' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "aba2c458-fbc4-4039-bc23-d828faa8f69c"}]': finished
Jan 20 14:03:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 20 14:03:17 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 20 14:03:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 14:03:17 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 14:03:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:17 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:17 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 14:03:17 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 14:03:18 np0005589310 lvm[83424]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:03:18 np0005589310 lvm[83424]: VG ceph_vg1 finished
Jan 20 14:03:18 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 20 14:03:18 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 20 14:03:18 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 20 14:03:18 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:18 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 20 14:03:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:18 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 20 14:03:18 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401551903' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 20 14:03:18 np0005589310 condescending_blackburn[82396]: stderr: got monmap epoch 1
Jan 20 14:03:18 np0005589310 condescending_blackburn[82396]: --> Creating keyring file for osd.1
Jan 20 14:03:18 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 20 14:03:18 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 20 14:03:18 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid aba2c458-fbc4-4039-bc23-d828faa8f69c --setuser ceph --setgroup ceph
Jan 20 14:03:18 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/1217177961' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "aba2c458-fbc4-4039-bc23-d828faa8f69c"} : dispatch
Jan 20 14:03:18 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/1217177961' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "aba2c458-fbc4-4039-bc23-d828faa8f69c"}]': finished
Jan 20 14:03:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:19 np0005589310 condescending_blackburn[82396]: stderr: 2026-01-20T19:03:18.829+0000 7fb2b71d58c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 20 14:03:19 np0005589310 condescending_blackburn[82396]: stderr: 2026-01-20T19:03:18.856+0000 7fb2b71d58c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 20 14:03:19 np0005589310 condescending_blackburn[82396]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f12cccca-abeb-4720-98f5-dcecf6096427
Jan 20 14:03:20 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:20 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f12cccca-abeb-4720-98f5-dcecf6096427"} v 0)
Jan 20 14:03:20 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3657180307' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "f12cccca-abeb-4720-98f5-dcecf6096427"} : dispatch
Jan 20 14:03:20 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 20 14:03:20 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 14:03:20 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3657180307' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f12cccca-abeb-4720-98f5-dcecf6096427"}]': finished
Jan 20 14:03:20 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 20 14:03:20 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Jan 20 14:03:20 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 14:03:20 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 14:03:20 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:20 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:20 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:20 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 14:03:20 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 14:03:20 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:20 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:20 np0005589310 lvm[84362]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:03:20 np0005589310 lvm[84362]: VG ceph_vg2 finished
Jan 20 14:03:20 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3657180307' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "f12cccca-abeb-4720-98f5-dcecf6096427"} : dispatch
Jan 20 14:03:20 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3657180307' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f12cccca-abeb-4720-98f5-dcecf6096427"}]': finished
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:20 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 20 14:03:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 20 14:03:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1513095021' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 20 14:03:21 np0005589310 condescending_blackburn[82396]: stderr: got monmap epoch 1
Jan 20 14:03:21 np0005589310 condescending_blackburn[82396]: --> Creating keyring file for osd.2
Jan 20 14:03:21 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 20 14:03:21 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 20 14:03:21 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid f12cccca-abeb-4720-98f5-dcecf6096427 --setuser ceph --setgroup ceph
Jan 20 14:03:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:22 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:23 np0005589310 condescending_blackburn[82396]: stderr: 2026-01-20T19:03:21.743+0000 7f90a125c8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 20 14:03:23 np0005589310 condescending_blackburn[82396]: stderr: 2026-01-20T19:03:21.769+0000 7f90a125c8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 20 14:03:23 np0005589310 condescending_blackburn[82396]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 20 14:03:23 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 20 14:03:23 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 20 14:03:23 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:23 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:23 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 20 14:03:23 np0005589310 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 20 14:03:23 np0005589310 condescending_blackburn[82396]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 20 14:03:23 np0005589310 condescending_blackburn[82396]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Jan 20 14:03:23 np0005589310 systemd[1]: libpod-0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960.scope: Deactivated successfully.
Jan 20 14:03:23 np0005589310 systemd[1]: libpod-0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960.scope: Consumed 6.115s CPU time.
Jan 20 14:03:23 np0005589310 podman[85275]: 2026-01-20 19:03:23.814185221 +0000 UTC m=+0.033674743 container died 0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:03:23 np0005589310 systemd[1]: var-lib-containers-storage-overlay-847f4d6702ca8481e19ff2dc1fdf4673dc737d0633bb720e7520c2baf4827ca8-merged.mount: Deactivated successfully.
Jan 20 14:03:23 np0005589310 podman[85275]: 2026-01-20 19:03:23.860589896 +0000 UTC m=+0.080079308 container remove 0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 14:03:23 np0005589310 systemd[1]: libpod-conmon-0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960.scope: Deactivated successfully.
Jan 20 14:03:24 np0005589310 podman[85353]: 2026-01-20 19:03:24.323637806 +0000 UTC m=+0.020146220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:24 np0005589310 podman[85353]: 2026-01-20 19:03:24.43964534 +0000 UTC m=+0.136153734 container create 0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:24 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:24 np0005589310 systemd[1]: Started libpod-conmon-0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1.scope.
Jan 20 14:03:24 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:24 np0005589310 podman[85353]: 2026-01-20 19:03:24.686076979 +0000 UTC m=+0.382585393 container init 0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 14:03:24 np0005589310 podman[85353]: 2026-01-20 19:03:24.693625549 +0000 UTC m=+0.390133943 container start 0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:24 np0005589310 podman[85353]: 2026-01-20 19:03:24.697134873 +0000 UTC m=+0.393643267 container attach 0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:03:24 np0005589310 musing_ardinghelli[85370]: 167 167
Jan 20 14:03:24 np0005589310 systemd[1]: libpod-0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1.scope: Deactivated successfully.
Jan 20 14:03:24 np0005589310 podman[85353]: 2026-01-20 19:03:24.699435158 +0000 UTC m=+0.395943552 container died 0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:24 np0005589310 systemd[1]: var-lib-containers-storage-overlay-3c7e31ecc7fe9322495a536cbe225bf310f197c59ba4531a404162273a8dfff1-merged.mount: Deactivated successfully.
Jan 20 14:03:25 np0005589310 podman[85353]: 2026-01-20 19:03:25.208424322 +0000 UTC m=+0.904932716 container remove 0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:25 np0005589310 systemd[1]: libpod-conmon-0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1.scope: Deactivated successfully.
Jan 20 14:03:25 np0005589310 podman[85394]: 2026-01-20 19:03:25.354194605 +0000 UTC m=+0.027424094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:25 np0005589310 podman[85394]: 2026-01-20 19:03:25.771326981 +0000 UTC m=+0.444556430 container create cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:03:25 np0005589310 systemd[1]: Started libpod-conmon-cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c.scope.
Jan 20 14:03:25 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:25 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ada9cbd9c0db51dd2449e0edc47b65dbb5f6da8fa76a1354b8d7061f9d29ddc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:25 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ada9cbd9c0db51dd2449e0edc47b65dbb5f6da8fa76a1354b8d7061f9d29ddc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:25 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ada9cbd9c0db51dd2449e0edc47b65dbb5f6da8fa76a1354b8d7061f9d29ddc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:25 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ada9cbd9c0db51dd2449e0edc47b65dbb5f6da8fa76a1354b8d7061f9d29ddc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:25 np0005589310 podman[85394]: 2026-01-20 19:03:25.877707205 +0000 UTC m=+0.550936664 container init cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:25 np0005589310 podman[85394]: 2026-01-20 19:03:25.885506371 +0000 UTC m=+0.558735810 container start cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 20 14:03:25 np0005589310 podman[85394]: 2026-01-20 19:03:25.889730912 +0000 UTC m=+0.562960361 container attach cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]: {
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:    "0": [
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:        {
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "devices": [
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "/dev/loop3"
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            ],
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "lv_name": "ceph_lv0",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "lv_size": "21470642176",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "name": "ceph_lv0",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "tags": {
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.cluster_name": "ceph",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.crush_device_class": "",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.encrypted": "0",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.objectstore": "bluestore",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.osd_id": "0",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.type": "block",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.vdo": "0",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.with_tpm": "0"
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            },
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "type": "block",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "vg_name": "ceph_vg0"
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:        }
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:    ],
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:    "1": [
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:        {
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "devices": [
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "/dev/loop4"
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            ],
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "lv_name": "ceph_lv1",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "lv_size": "21470642176",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "name": "ceph_lv1",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "tags": {
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.cluster_name": "ceph",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.crush_device_class": "",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.encrypted": "0",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.objectstore": "bluestore",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.osd_id": "1",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.type": "block",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.vdo": "0",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.with_tpm": "0"
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            },
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "type": "block",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "vg_name": "ceph_vg1"
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:        }
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:    ],
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:    "2": [
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:        {
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "devices": [
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "/dev/loop5"
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            ],
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "lv_name": "ceph_lv2",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "lv_size": "21470642176",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "name": "ceph_lv2",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "tags": {
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.cluster_name": "ceph",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.crush_device_class": "",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.encrypted": "0",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.objectstore": "bluestore",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.osd_id": "2",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.type": "block",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.vdo": "0",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:                "ceph.with_tpm": "0"
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            },
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "type": "block",
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:            "vg_name": "ceph_vg2"
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:        }
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]:    ]
Jan 20 14:03:26 np0005589310 beautiful_noether[85411]: }
Jan 20 14:03:26 np0005589310 systemd[1]: libpod-cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c.scope: Deactivated successfully.
Jan 20 14:03:26 np0005589310 podman[85394]: 2026-01-20 19:03:26.183629353 +0000 UTC m=+0.856858802 container died cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:03:26 np0005589310 systemd[1]: var-lib-containers-storage-overlay-9ada9cbd9c0db51dd2449e0edc47b65dbb5f6da8fa76a1354b8d7061f9d29ddc-merged.mount: Deactivated successfully.
Jan 20 14:03:26 np0005589310 podman[85394]: 2026-01-20 19:03:26.232822874 +0000 UTC m=+0.906052333 container remove cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 14:03:26 np0005589310 systemd[1]: libpod-conmon-cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c.scope: Deactivated successfully.
Jan 20 14:03:26 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 20 14:03:26 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 20 14:03:26 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:03:26 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:03:26 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 20 14:03:26 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 20 14:03:26 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:26 np0005589310 podman[85521]: 2026-01-20 19:03:26.780890279 +0000 UTC m=+0.022642540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:26 np0005589310 podman[85521]: 2026-01-20 19:03:26.969524513 +0000 UTC m=+0.211276694 container create c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:03:26 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 20 14:03:26 np0005589310 ceph-mon[75120]: Deploying daemon osd.0 on compute-0
Jan 20 14:03:27 np0005589310 systemd[1]: Started libpod-conmon-c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a.scope.
Jan 20 14:03:27 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:27 np0005589310 podman[85521]: 2026-01-20 19:03:27.083970329 +0000 UTC m=+0.325722500 container init c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:03:27 np0005589310 podman[85521]: 2026-01-20 19:03:27.092475342 +0000 UTC m=+0.334227553 container start c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 20 14:03:27 np0005589310 kind_faraday[85537]: 167 167
Jan 20 14:03:27 np0005589310 systemd[1]: libpod-c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a.scope: Deactivated successfully.
Jan 20 14:03:27 np0005589310 conmon[85537]: conmon c368b7601c1575da914b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a.scope/container/memory.events
Jan 20 14:03:27 np0005589310 podman[85521]: 2026-01-20 19:03:27.108577035 +0000 UTC m=+0.350329306 container attach c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:27 np0005589310 podman[85521]: 2026-01-20 19:03:27.109574809 +0000 UTC m=+0.351327000 container died c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 20 14:03:27 np0005589310 systemd[1]: var-lib-containers-storage-overlay-414c3733188c5fd091018392748bea4279236703ccb1f57eea07d10045f47f40-merged.mount: Deactivated successfully.
Jan 20 14:03:27 np0005589310 podman[85521]: 2026-01-20 19:03:27.19945991 +0000 UTC m=+0.441212101 container remove c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 14:03:27 np0005589310 systemd[1]: libpod-conmon-c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a.scope: Deactivated successfully.
Jan 20 14:03:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:27 np0005589310 podman[85567]: 2026-01-20 19:03:27.530989217 +0000 UTC m=+0.093066158 container create b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:27 np0005589310 podman[85567]: 2026-01-20 19:03:27.466346227 +0000 UTC m=+0.028423208 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:27 np0005589310 systemd[1]: Started libpod-conmon-b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb.scope.
Jan 20 14:03:27 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:27 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd54683150588e81ba1f34dbe6857f5167211497eabc141353d0f99452e7d37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:27 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd54683150588e81ba1f34dbe6857f5167211497eabc141353d0f99452e7d37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:27 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd54683150588e81ba1f34dbe6857f5167211497eabc141353d0f99452e7d37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:27 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd54683150588e81ba1f34dbe6857f5167211497eabc141353d0f99452e7d37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:27 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd54683150588e81ba1f34dbe6857f5167211497eabc141353d0f99452e7d37/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:27 np0005589310 podman[85567]: 2026-01-20 19:03:27.620267074 +0000 UTC m=+0.182344045 container init b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:27 np0005589310 podman[85567]: 2026-01-20 19:03:27.633167011 +0000 UTC m=+0.195243952 container start b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:03:27 np0005589310 podman[85567]: 2026-01-20 19:03:27.661307311 +0000 UTC m=+0.223384282 container attach b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 20 14:03:27 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test[85583]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 20 14:03:27 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test[85583]:                            [--no-systemd] [--no-tmpfs]
Jan 20 14:03:27 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test[85583]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 20 14:03:27 np0005589310 systemd[1]: libpod-b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb.scope: Deactivated successfully.
Jan 20 14:03:27 np0005589310 podman[85567]: 2026-01-20 19:03:27.829755163 +0000 UTC m=+0.391832094 container died b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 20 14:03:27 np0005589310 systemd[1]: var-lib-containers-storage-overlay-3cd54683150588e81ba1f34dbe6857f5167211497eabc141353d0f99452e7d37-merged.mount: Deactivated successfully.
Jan 20 14:03:28 np0005589310 podman[85567]: 2026-01-20 19:03:28.043848573 +0000 UTC m=+0.605925514 container remove b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:28 np0005589310 systemd[1]: libpod-conmon-b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb.scope: Deactivated successfully.
Jan 20 14:03:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:28 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:28 np0005589310 systemd[1]: Reloading.
Jan 20 14:03:28 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:03:28 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:03:28 np0005589310 systemd[1]: Reloading.
Jan 20 14:03:28 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:03:28 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:03:29 np0005589310 systemd[1]: Starting Ceph osd.0 for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 14:03:29 np0005589310 podman[85744]: 2026-01-20 19:03:29.439505629 +0000 UTC m=+0.057534442 container create 2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:29 np0005589310 podman[85744]: 2026-01-20 19:03:29.405433317 +0000 UTC m=+0.023462180 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:29 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:29 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b649009381b7c89d830c47136ee43eb39991aabd5b6a616c898e724b3d42e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:29 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b649009381b7c89d830c47136ee43eb39991aabd5b6a616c898e724b3d42e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:29 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b649009381b7c89d830c47136ee43eb39991aabd5b6a616c898e724b3d42e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:29 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b649009381b7c89d830c47136ee43eb39991aabd5b6a616c898e724b3d42e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:29 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b649009381b7c89d830c47136ee43eb39991aabd5b6a616c898e724b3d42e7/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:29 np0005589310 podman[85744]: 2026-01-20 19:03:29.556506776 +0000 UTC m=+0.174535609 container init 2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 20 14:03:29 np0005589310 podman[85744]: 2026-01-20 19:03:29.571971004 +0000 UTC m=+0.189999857 container start 2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:03:29 np0005589310 podman[85744]: 2026-01-20 19:03:29.576451181 +0000 UTC m=+0.194480024 container attach 2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:03:29 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:29 np0005589310 bash[85744]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:29 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:29 np0005589310 bash[85744]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:30 np0005589310 lvm[85844]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:03:30 np0005589310 lvm[85844]: VG ceph_vg0 finished
Jan 20 14:03:30 np0005589310 lvm[85847]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:03:30 np0005589310 lvm[85847]: VG ceph_vg1 finished
Jan 20 14:03:30 np0005589310 lvm[85849]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:03:30 np0005589310 lvm[85849]: VG ceph_vg2 finished
Jan 20 14:03:30 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 20 14:03:30 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:30 np0005589310 bash[85744]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 20 14:03:30 np0005589310 bash[85744]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:30 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:30 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:30 np0005589310 bash[85744]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:30 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 14:03:30 np0005589310 bash[85744]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 14:03:30 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 20 14:03:30 np0005589310 bash[85744]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 20 14:03:30 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:30 np0005589310 bash[85744]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:30 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:30 np0005589310 bash[85744]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:30 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 14:03:30 np0005589310 bash[85744]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 14:03:30 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 14:03:30 np0005589310 bash[85744]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 14:03:30 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 20 14:03:30 np0005589310 bash[85744]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 20 14:03:30 np0005589310 systemd[1]: libpod-2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51.scope: Deactivated successfully.
Jan 20 14:03:30 np0005589310 podman[85744]: 2026-01-20 19:03:30.674838775 +0000 UTC m=+1.292867588 container died 2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:30 np0005589310 systemd[1]: libpod-2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51.scope: Consumed 1.567s CPU time.
Jan 20 14:03:30 np0005589310 systemd[1]: var-lib-containers-storage-overlay-b0b649009381b7c89d830c47136ee43eb39991aabd5b6a616c898e724b3d42e7-merged.mount: Deactivated successfully.
Jan 20 14:03:30 np0005589310 podman[85744]: 2026-01-20 19:03:30.716136629 +0000 UTC m=+1.334165442 container remove 2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:03:30 np0005589310 podman[86002]: 2026-01-20 19:03:30.928753784 +0000 UTC m=+0.039331429 container create eabc59bf78c29281caec780e2f63d21f2c1631016579501e797d66320f85da8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:03:30 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0ae021064ebbf93c693f40265c567a70e199fdba77b42383da98125b0de47a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:30 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0ae021064ebbf93c693f40265c567a70e199fdba77b42383da98125b0de47a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:30 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0ae021064ebbf93c693f40265c567a70e199fdba77b42383da98125b0de47a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:30 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0ae021064ebbf93c693f40265c567a70e199fdba77b42383da98125b0de47a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:30 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0ae021064ebbf93c693f40265c567a70e199fdba77b42383da98125b0de47a/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:30 np0005589310 podman[86002]: 2026-01-20 19:03:30.983868736 +0000 UTC m=+0.094446401 container init eabc59bf78c29281caec780e2f63d21f2c1631016579501e797d66320f85da8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:30 np0005589310 podman[86002]: 2026-01-20 19:03:30.994183232 +0000 UTC m=+0.104760877 container start eabc59bf78c29281caec780e2f63d21f2c1631016579501e797d66320f85da8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 20 14:03:30 np0005589310 bash[86002]: eabc59bf78c29281caec780e2f63d21f2c1631016579501e797d66320f85da8d
Jan 20 14:03:30 np0005589310 podman[86002]: 2026-01-20 19:03:30.911279357 +0000 UTC m=+0.021857032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:31 np0005589310 systemd[1]: Started Ceph osd.0 for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: pidfile_write: ignore empty --pid-file
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 14:03:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 14:03:31 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:03:31 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 20 14:03:31 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 20 14:03:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:03:31 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:03:31 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 20 14:03:31 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2400 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: load: jerasure load: lrc 
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs mount
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs mount shared_bdev_used = 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: RocksDB version: 7.9.2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Git sha 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: DB SUMMARY
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: DB Session ID:  2LYYGZSRKWTX2JVYO344
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: CURRENT file:  CURRENT
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                         Options.error_if_exists: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.create_if_missing: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                                     Options.env: 0x561427633ea0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                                Options.info_log: 0x5614286848a0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                              Options.statistics: (nil)
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.use_fsync: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                              Options.db_log_dir: 
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.write_buffer_manager: 0x561427698b40
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.unordered_write: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.row_cache: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                              Options.wal_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.two_write_queues: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.wal_compression: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.atomic_flush: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.max_background_jobs: 4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.max_background_compactions: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.max_subcompactions: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.max_open_files: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Compression algorithms supported:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: 	kZSTD supported: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: 	kXpressCompression supported: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: 	kBZip2Compression supported: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: 	kLZ4Compression supported: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: 	kZlibCompression supported: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: 	kSnappyCompression supported: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614276378d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614276378d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614276378d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614276378d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614276378d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5614276378d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5614276378d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561427637a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561427637a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561427637a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d9c11cee-4e1e-4d55-b52b-c650acf03792
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935811464469, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935811466545, "job": 1, "event": "recovery_finished"}
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: freelist init
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: freelist _read_cfg
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs umount
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs mount
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluefs mount shared_bdev_used = 27262976
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: RocksDB version: 7.9.2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Git sha 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: DB SUMMARY
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: DB Session ID:  2LYYGZSRKWTX2JVYO345
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: CURRENT file:  CURRENT
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                         Options.error_if_exists: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.create_if_missing: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                                     Options.env: 0x561427633ce0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                                Options.info_log: 0x5614287112a0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                              Options.statistics: (nil)
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.use_fsync: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                              Options.db_log_dir: 
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.write_buffer_manager: 0x561427699900
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.unordered_write: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.row_cache: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                              Options.wal_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.two_write_queues: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.wal_compression: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.atomic_flush: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.max_background_jobs: 4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.max_background_compactions: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.max_subcompactions: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.max_open_files: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Compression algorithms supported:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: #011kZSTD supported: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: #011kXpressCompression supported: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: #011kBZip2Compression supported: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: #011kLZ4Compression supported: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: #011kZlibCompression supported: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: #011kSnappyCompression supported: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ce0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561427637a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ce0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561427637a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ce0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561427637a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ce0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561427637a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ce0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561427637a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ce0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561427637a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ce0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561427637a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ee0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614276374b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ee0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614276374b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ee0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5614276374b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d9c11cee-4e1e-4d55-b52b-c650acf03792
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935811505434, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935811509961, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935811, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9c11cee-4e1e-4d55-b52b-c650acf03792", "db_session_id": "2LYYGZSRKWTX2JVYO345", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:03:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:03:31
Jan 20 14:03:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:03:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:03:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] No pools available
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935811513957, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935811, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9c11cee-4e1e-4d55-b52b-c650acf03792", "db_session_id": "2LYYGZSRKWTX2JVYO345", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935811517685, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935811, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9c11cee-4e1e-4d55-b52b-c650acf03792", "db_session_id": "2LYYGZSRKWTX2JVYO345", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935811519142, "job": 1, "event": "recovery_finished"}
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56142888dc00
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: DB pointer 0x56142883e000
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561427637a30#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561427637a30#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: _get_class not permitted to load lua
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: _get_class not permitted to load sdk
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: osd.0 0 load_pgs
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: osd.0 0 load_pgs opened 0 pgs
Jan 20 14:03:31 np0005589310 ceph-osd[86022]: osd.0 0 log_to_monitors true
Jan 20 14:03:31 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0[86018]: 2026-01-20T19:03:31.544+0000 7f1d61cfc8c0 -1 osd.0 0 log_to_monitors true
Jan 20 14:03:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 20 14:03:31 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 20 14:03:31 np0005589310 podman[86533]: 2026-01-20 19:03:31.581267847 +0000 UTC m=+0.040239790 container create 3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_almeida, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 20 14:03:31 np0005589310 systemd[1]: Started libpod-conmon-3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541.scope.
Jan 20 14:03:31 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:31 np0005589310 podman[86533]: 2026-01-20 19:03:31.562805746 +0000 UTC m=+0.021777679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:31 np0005589310 podman[86533]: 2026-01-20 19:03:31.663441924 +0000 UTC m=+0.122413907 container init 3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_almeida, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:03:31 np0005589310 podman[86533]: 2026-01-20 19:03:31.670332198 +0000 UTC m=+0.129304141 container start 3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_almeida, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:31 np0005589310 podman[86533]: 2026-01-20 19:03:31.67377642 +0000 UTC m=+0.132748373 container attach 3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_almeida, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:31 np0005589310 charming_almeida[86582]: 167 167
Jan 20 14:03:31 np0005589310 systemd[1]: libpod-3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541.scope: Deactivated successfully.
Jan 20 14:03:31 np0005589310 conmon[86582]: conmon 3fea38f4e66fd6846bd3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541.scope/container/memory.events
Jan 20 14:03:31 np0005589310 podman[86533]: 2026-01-20 19:03:31.677090799 +0000 UTC m=+0.136062762 container died 3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:31 np0005589310 systemd[1]: var-lib-containers-storage-overlay-dbe96ada11116fce507484985b464bb155f669caf4050d9072b6084a80d3a4b1-merged.mount: Deactivated successfully.
Jan 20 14:03:31 np0005589310 podman[86533]: 2026-01-20 19:03:31.714711955 +0000 UTC m=+0.173683878 container remove 3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:31 np0005589310 systemd[1]: libpod-conmon-3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541.scope: Deactivated successfully.
Jan 20 14:03:31 np0005589310 podman[86612]: 2026-01-20 19:03:31.98150188 +0000 UTC m=+0.061162127 container create 19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 14:03:32 np0005589310 systemd[1]: Started libpod-conmon-19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31.scope.
Jan 20 14:03:32 np0005589310 podman[86612]: 2026-01-20 19:03:31.960128271 +0000 UTC m=+0.039788538 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:32 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:32 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1541753737d7f915f98648f4d711f5c9f4844419edd9cc25ae59cb1d01365f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:32 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1541753737d7f915f98648f4d711f5c9f4844419edd9cc25ae59cb1d01365f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:32 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1541753737d7f915f98648f4d711f5c9f4844419edd9cc25ae59cb1d01365f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:32 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1541753737d7f915f98648f4d711f5c9f4844419edd9cc25ae59cb1d01365f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:32 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1541753737d7f915f98648f4d711f5c9f4844419edd9cc25ae59cb1d01365f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: Deploying daemon osd.1 on compute-0
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 14:03:32 np0005589310 podman[86612]: 2026-01-20 19:03:32.093189111 +0000 UTC m=+0.172849368 container init 19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 20 14:03:32 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:32 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:32 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 14:03:32 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:32 np0005589310 podman[86612]: 2026-01-20 19:03:32.106711593 +0000 UTC m=+0.186371860 container start 19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:32 np0005589310 podman[86612]: 2026-01-20 19:03:32.111674421 +0000 UTC m=+0.191334668 container attach 19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 14:03:32 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test[86628]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 20 14:03:32 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test[86628]:                            [--no-systemd] [--no-tmpfs]
Jan 20 14:03:32 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test[86628]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 20 14:03:32 np0005589310 systemd[1]: libpod-19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31.scope: Deactivated successfully.
Jan 20 14:03:32 np0005589310 podman[86612]: 2026-01-20 19:03:32.330510984 +0000 UTC m=+0.410171221 container died 19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:32 np0005589310 systemd[1]: var-lib-containers-storage-overlay-9d1541753737d7f915f98648f4d711f5c9f4844419edd9cc25ae59cb1d01365f-merged.mount: Deactivated successfully.
Jan 20 14:03:32 np0005589310 podman[86612]: 2026-01-20 19:03:32.380760011 +0000 UTC m=+0.460420248 container remove 19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:03:32 np0005589310 systemd[1]: libpod-conmon-19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31.scope: Deactivated successfully.
Jan 20 14:03:32 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:32 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 20 14:03:32 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 20 14:03:32 np0005589310 systemd[1]: Reloading.
Jan 20 14:03:32 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:03:32 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:03:32 np0005589310 systemd[1]: Reloading.
Jan 20 14:03:33 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:03:33 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 20 14:03:33 np0005589310 ceph-osd[86022]: osd.0 0 done with init, starting boot process
Jan 20 14:03:33 np0005589310 ceph-osd[86022]: osd.0 0 start_boot
Jan 20 14:03:33 np0005589310 ceph-osd[86022]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 20 14:03:33 np0005589310 ceph-osd[86022]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 20 14:03:33 np0005589310 ceph-osd[86022]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 20 14:03:33 np0005589310 ceph-osd[86022]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 20 14:03:33 np0005589310 ceph-osd[86022]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:33 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 14:03:33 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 14:03:33 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:33 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4109328083; not ready for session (expect reconnect)
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 14:03:33 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 14:03:33 np0005589310 systemd[1]: Starting Ceph osd.1 for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 14:03:33 np0005589310 podman[86789]: 2026-01-20 19:03:33.417248911 +0000 UTC m=+0.040696731 container create d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:03:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:33 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:33 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb3eb22ae615101be07b8bf03828b568dd78268b86437740945116deb3cf652/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:33 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb3eb22ae615101be07b8bf03828b568dd78268b86437740945116deb3cf652/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:33 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb3eb22ae615101be07b8bf03828b568dd78268b86437740945116deb3cf652/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:33 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb3eb22ae615101be07b8bf03828b568dd78268b86437740945116deb3cf652/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:33 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb3eb22ae615101be07b8bf03828b568dd78268b86437740945116deb3cf652/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:33 np0005589310 podman[86789]: 2026-01-20 19:03:33.401474675 +0000 UTC m=+0.024922545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:33 np0005589310 podman[86789]: 2026-01-20 19:03:33.522348834 +0000 UTC m=+0.145796684 container init d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:33 np0005589310 podman[86789]: 2026-01-20 19:03:33.527384074 +0000 UTC m=+0.150831894 container start d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:33 np0005589310 podman[86789]: 2026-01-20 19:03:33.547884012 +0000 UTC m=+0.171331832 container attach d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 14:03:33 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:33 np0005589310 bash[86789]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:33 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:33 np0005589310 bash[86789]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:34 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4109328083; not ready for session (expect reconnect)
Jan 20 14:03:34 np0005589310 ceph-mon[75120]: from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 14:03:34 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 14:03:34 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 14:03:34 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 14:03:34 np0005589310 lvm[86890]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:03:34 np0005589310 lvm[86890]: VG ceph_vg1 finished
Jan 20 14:03:34 np0005589310 lvm[86889]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:03:34 np0005589310 lvm[86889]: VG ceph_vg0 finished
Jan 20 14:03:34 np0005589310 lvm[86892]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:03:34 np0005589310 lvm[86892]: VG ceph_vg2 finished
Jan 20 14:03:34 np0005589310 lvm[86893]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:03:34 np0005589310 lvm[86893]: VG ceph_vg0 finished
Jan 20 14:03:34 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 20 14:03:34 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:34 np0005589310 bash[86789]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 20 14:03:34 np0005589310 bash[86789]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:34 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:34 np0005589310 bash[86789]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:03:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:03:34 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:03:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:03:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:03:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:03:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:03:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:03:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:03:34 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 14:03:34 np0005589310 bash[86789]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 14:03:34 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 20 14:03:34 np0005589310 bash[86789]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 20 14:03:34 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:34 np0005589310 bash[86789]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:34 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:34 np0005589310 bash[86789]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:34 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 20 14:03:34 np0005589310 bash[86789]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 20 14:03:34 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 14:03:34 np0005589310 bash[86789]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 14:03:34 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 20 14:03:34 np0005589310 bash[86789]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 20 14:03:34 np0005589310 systemd[1]: libpod-d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb.scope: Deactivated successfully.
Jan 20 14:03:34 np0005589310 systemd[1]: libpod-d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb.scope: Consumed 1.547s CPU time.
Jan 20 14:03:34 np0005589310 podman[86989]: 2026-01-20 19:03:34.662209866 +0000 UTC m=+0.025217522 container died d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 14:03:34 np0005589310 systemd[1]: var-lib-containers-storage-overlay-adb3eb22ae615101be07b8bf03828b568dd78268b86437740945116deb3cf652-merged.mount: Deactivated successfully.
Jan 20 14:03:34 np0005589310 podman[86989]: 2026-01-20 19:03:34.879307817 +0000 UTC m=+0.242315473 container remove d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 14:03:35 np0005589310 podman[87051]: 2026-01-20 19:03:35.08262245 +0000 UTC m=+0.038721302 container create bfb3a392dadbfba129a0ec858cdb44a48baac2ff8e51790a73dd61828541b643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:35 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4109328083; not ready for session (expect reconnect)
Jan 20 14:03:35 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7094eb738495fead110abbc7629a6438fa8d35f6d2b2f6ae1fdaf2ffdb08080f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:35 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7094eb738495fead110abbc7629a6438fa8d35f6d2b2f6ae1fdaf2ffdb08080f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:35 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7094eb738495fead110abbc7629a6438fa8d35f6d2b2f6ae1fdaf2ffdb08080f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:35 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7094eb738495fead110abbc7629a6438fa8d35f6d2b2f6ae1fdaf2ffdb08080f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:35 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7094eb738495fead110abbc7629a6438fa8d35f6d2b2f6ae1fdaf2ffdb08080f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 14:03:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 14:03:35 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 14:03:35 np0005589310 podman[87051]: 2026-01-20 19:03:35.067719286 +0000 UTC m=+0.023818168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:35 np0005589310 podman[87051]: 2026-01-20 19:03:35.190690655 +0000 UTC m=+0.146789597 container init bfb3a392dadbfba129a0ec858cdb44a48baac2ff8e51790a73dd61828541b643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:03:35 np0005589310 podman[87051]: 2026-01-20 19:03:35.20052904 +0000 UTC m=+0.156627932 container start bfb3a392dadbfba129a0ec858cdb44a48baac2ff8e51790a73dd61828541b643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 20 14:03:35 np0005589310 bash[87051]: bfb3a392dadbfba129a0ec858cdb44a48baac2ff8e51790a73dd61828541b643
Jan 20 14:03:35 np0005589310 systemd[1]: Started Ceph osd.1 for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: pidfile_write: ignore empty --pid-file
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 14:03:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 14:03:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:03:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 20 14:03:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 20 14:03:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:03:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:03:35 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 20 14:03:35 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8400 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: load: jerasure load: lrc 
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 14:03:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs mount
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs mount shared_bdev_used = 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: RocksDB version: 7.9.2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Git sha 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: DB SUMMARY
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: DB Session ID:  BJ7CSLXC1OMZX8UVKFMI
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: CURRENT file:  CURRENT
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                         Options.error_if_exists: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.create_if_missing: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                                     Options.env: 0x5614d8d39ea0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                                Options.info_log: 0x5614d9d8a8a0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                              Options.statistics: (nil)
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.use_fsync: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                              Options.db_log_dir: 
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.write_buffer_manager: 0x5614d8d9eb40
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.unordered_write: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.row_cache: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                              Options.wal_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.two_write_queues: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.wal_compression: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.atomic_flush: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.max_background_jobs: 4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.max_background_compactions: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.max_subcompactions: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.max_open_files: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Compression algorithms supported:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kZSTD supported: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kXpressCompression supported: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kBZip2Compression supported: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kLZ4Compression supported: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kZlibCompression supported: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kSnappyCompression supported: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614d8d3d8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614d8d3d8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614d8d3d8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5614d8d3d8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5614d8d3d8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5614d8d3d8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5614d8d3d8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5614d8d3da30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5614d8d3da30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5614d8d3da30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 42fb52ca-080c-4b0c-8916-488ff4bd7976
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935815612736, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935815614231, "job": 1, "event": "recovery_finished"}
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: freelist init
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: freelist _read_cfg
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs umount
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs mount
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluefs mount shared_bdev_used = 27262976
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: RocksDB version: 7.9.2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Git sha 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: DB SUMMARY
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: DB Session ID:  BJ7CSLXC1OMZX8UVKFMJ
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: CURRENT file:  CURRENT
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                         Options.error_if_exists: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.create_if_missing: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                                     Options.env: 0x5614d8d39d50
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                                Options.info_log: 0x5614d9d8baa0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                              Options.statistics: (nil)
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.use_fsync: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                              Options.db_log_dir: 
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.write_buffer_manager: 0x5614d8d9f900
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.unordered_write: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.row_cache: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                              Options.wal_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.two_write_queues: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.wal_compression: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.atomic_flush: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.max_background_jobs: 4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.max_background_compactions: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.max_subcompactions: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.max_open_files: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Compression algorithms supported:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kZSTD supported: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kXpressCompression supported: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kBZip2Compression supported: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kLZ4Compression supported: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kZlibCompression supported: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: 	kSnappyCompression supported: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bea0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614d8d3da30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bea0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614d8d3da30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bea0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614d8d3da30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bea0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614d8d3da30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bea0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614d8d3da30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bea0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614d8d3da30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bea0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614d8d3da30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bec0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5614d8d3d4b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bec0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5614d8d3d4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bec0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5614d8d3d4b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 42fb52ca-080c-4b0c-8916-488ff4bd7976
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935815668202, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935815684107, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935815, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42fb52ca-080c-4b0c-8916-488ff4bd7976", "db_session_id": "BJ7CSLXC1OMZX8UVKFMJ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935815713144, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935815, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42fb52ca-080c-4b0c-8916-488ff4bd7976", "db_session_id": "BJ7CSLXC1OMZX8UVKFMJ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935815716558, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935815, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42fb52ca-080c-4b0c-8916-488ff4bd7976", "db_session_id": "BJ7CSLXC1OMZX8UVKFMJ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935815734088, "job": 1, "event": "recovery_finished"}
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5614d9f93c00
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: DB pointer 0x5614d9f44000
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 460.80 MB usag
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: _get_class not permitted to load lua
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: _get_class not permitted to load sdk
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: osd.1 0 load_pgs
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: osd.1 0 load_pgs opened 0 pgs
Jan 20 14:03:35 np0005589310 ceph-osd[87071]: osd.1 0 log_to_monitors true
Jan 20 14:03:35 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1[87067]: 2026-01-20T19:03:35.802+0000 7f47b91368c0 -1 osd.1 0 log_to_monitors true
Jan 20 14:03:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 20 14:03:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 20 14:03:35 np0005589310 podman[87583]: 2026-01-20 19:03:35.832503353 +0000 UTC m=+0.020659133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:36 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4109328083; not ready for session (expect reconnect)
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 14:03:36 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 14:03:36 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 14:03:36 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: Deploying daemon osd.2 on compute-0
Jan 20 14:03:36 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 20 14:03:36 np0005589310 podman[87583]: 2026-01-20 19:03:36.928001508 +0000 UTC m=+1.116157268 container create 759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carver, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:36 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:36 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 14:03:36 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 14:03:36 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:36 np0005589310 systemd[1]: Started libpod-conmon-759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb.scope.
Jan 20 14:03:37 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:37 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4109328083; not ready for session (expect reconnect)
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 14:03:37 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 14:03:37 np0005589310 podman[87583]: 2026-01-20 19:03:37.453912635 +0000 UTC m=+1.642068415 container init 759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:03:37 np0005589310 podman[87583]: 2026-01-20 19:03:37.466436274 +0000 UTC m=+1.654592034 container start 759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 14:03:37 np0005589310 youthful_carver[87625]: 167 167
Jan 20 14:03:37 np0005589310 systemd[1]: libpod-759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb.scope: Deactivated successfully.
Jan 20 14:03:37 np0005589310 conmon[87625]: conmon 759595342d1e8edd5b4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb.scope/container/memory.events
Jan 20 14:03:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:37 np0005589310 podman[87583]: 2026-01-20 19:03:37.576878155 +0000 UTC m=+1.765033925 container attach 759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carver, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:37 np0005589310 podman[87583]: 2026-01-20 19:03:37.577688064 +0000 UTC m=+1.765843824 container died 759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carver, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:37 np0005589310 systemd[1]: var-lib-containers-storage-overlay-000ba12a3bf90d09ed4658d6187b68efae3a305154836583e30f67f7205d7208-merged.mount: Deactivated successfully.
Jan 20 14:03:37 np0005589310 podman[87583]: 2026-01-20 19:03:37.711300527 +0000 UTC m=+1.899456287 container remove 759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 20 14:03:37 np0005589310 systemd[1]: libpod-conmon-759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb.scope: Deactivated successfully.
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Jan 20 14:03:37 np0005589310 ceph-osd[87071]: osd.1 0 done with init, starting boot process
Jan 20 14:03:37 np0005589310 ceph-osd[87071]: osd.1 0 start_boot
Jan 20 14:03:37 np0005589310 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 20 14:03:37 np0005589310 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 20 14:03:37 np0005589310 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 20 14:03:37 np0005589310 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 20 14:03:37 np0005589310 ceph-osd[87071]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 20 14:03:37 np0005589310 podman[87656]: 2026-01-20 19:03:37.982137608 +0000 UTC m=+0.059350414 container create 5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:37 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 14:03:37 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:37 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 14:03:37 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3353689594; not ready for session (expect reconnect)
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:37 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:37 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 14:03:38 np0005589310 podman[87656]: 2026-01-20 19:03:37.949152523 +0000 UTC m=+0.026365359 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:38 np0005589310 systemd[1]: Started libpod-conmon-5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4.scope.
Jan 20 14:03:38 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:38 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5009cbeb134fd2e242626bb831af8b0313b2de087c5f5805396d6176d70247d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:38 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5009cbeb134fd2e242626bb831af8b0313b2de087c5f5805396d6176d70247d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:38 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5009cbeb134fd2e242626bb831af8b0313b2de087c5f5805396d6176d70247d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:38 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5009cbeb134fd2e242626bb831af8b0313b2de087c5f5805396d6176d70247d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:38 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5009cbeb134fd2e242626bb831af8b0313b2de087c5f5805396d6176d70247d/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:38 np0005589310 podman[87656]: 2026-01-20 19:03:38.118442866 +0000 UTC m=+0.195655672 container init 5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:38 np0005589310 podman[87656]: 2026-01-20 19:03:38.12495577 +0000 UTC m=+0.202168576 container start 5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:03:38 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4109328083; not ready for session (expect reconnect)
Jan 20 14:03:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 14:03:38 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 14:03:38 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 14:03:38 np0005589310 podman[87656]: 2026-01-20 19:03:38.149806152 +0000 UTC m=+0.227018988 container attach 5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:03:38 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test[87671]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 20 14:03:38 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test[87671]:                            [--no-systemd] [--no-tmpfs]
Jan 20 14:03:38 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test[87671]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 20 14:03:38 np0005589310 systemd[1]: libpod-5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4.scope: Deactivated successfully.
Jan 20 14:03:38 np0005589310 podman[87656]: 2026-01-20 19:03:38.32431425 +0000 UTC m=+0.401527036 container died 5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:38 np0005589310 systemd[1]: var-lib-containers-storage-overlay-b5009cbeb134fd2e242626bb831af8b0313b2de087c5f5805396d6176d70247d-merged.mount: Deactivated successfully.
Jan 20 14:03:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:38 np0005589310 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 14:03:38 np0005589310 podman[87656]: 2026-01-20 19:03:38.471153277 +0000 UTC m=+0.548366073 container remove 5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:38 np0005589310 systemd[1]: libpod-conmon-5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4.scope: Deactivated successfully.
Jan 20 14:03:38 np0005589310 systemd[1]: Reloading.
Jan 20 14:03:38 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:03:38 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:03:38 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3353689594; not ready for session (expect reconnect)
Jan 20 14:03:39 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:39 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:39 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 14:03:39 np0005589310 ceph-mon[75120]: from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 14:03:39 np0005589310 ceph-osd[86022]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 10.972 iops: 2808.890 elapsed_sec: 1.068
Jan 20 14:03:39 np0005589310 ceph-osd[86022]: log_channel(cluster) log [WRN] : OSD bench result of 2808.890266 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 14:03:39 np0005589310 ceph-osd[86022]: osd.0 0 waiting for initial osdmap
Jan 20 14:03:39 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0[86018]: 2026-01-20T19:03:39.094+0000 7f1d5e490640 -1 osd.0 0 waiting for initial osdmap
Jan 20 14:03:39 np0005589310 ceph-osd[86022]: osd.0 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 20 14:03:39 np0005589310 ceph-osd[86022]: osd.0 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 20 14:03:39 np0005589310 ceph-osd[86022]: osd.0 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 20 14:03:39 np0005589310 ceph-osd[86022]: osd.0 10 check_osdmap_features require_osd_release unknown -> tentacle
Jan 20 14:03:39 np0005589310 ceph-osd[86022]: osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 14:03:39 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0[86018]: 2026-01-20T19:03:39.159+0000 7f1d58a83640 -1 osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 14:03:39 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4109328083; not ready for session (expect reconnect)
Jan 20 14:03:39 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 14:03:39 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 14:03:39 np0005589310 ceph-osd[86022]: osd.0 10 set_numa_affinity not setting numa affinity
Jan 20 14:03:39 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 14:03:39 np0005589310 ceph-osd[86022]: osd.0 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 20 14:03:39 np0005589310 systemd[1]: Reloading.
Jan 20 14:03:39 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:03:39 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:03:39 np0005589310 systemd[1]: Starting Ceph osd.2 for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 14:03:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 14:03:39 np0005589310 podman[87834]: 2026-01-20 19:03:39.841013778 +0000 UTC m=+0.080830037 container create ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:39 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e012497fcc1bd994fe19065dcb808c43757b7004f29a95a2b719ce6d5a225bc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e012497fcc1bd994fe19065dcb808c43757b7004f29a95a2b719ce6d5a225bc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e012497fcc1bd994fe19065dcb808c43757b7004f29a95a2b719ce6d5a225bc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e012497fcc1bd994fe19065dcb808c43757b7004f29a95a2b719ce6d5a225bc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e012497fcc1bd994fe19065dcb808c43757b7004f29a95a2b719ce6d5a225bc5/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:39 np0005589310 podman[87834]: 2026-01-20 19:03:39.812379865 +0000 UTC m=+0.052196154 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:39 np0005589310 podman[87834]: 2026-01-20 19:03:39.930437928 +0000 UTC m=+0.170254207 container init ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:39 np0005589310 podman[87834]: 2026-01-20 19:03:39.936220576 +0000 UTC m=+0.176036835 container start ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 20 14:03:39 np0005589310 podman[87834]: 2026-01-20 19:03:39.963962357 +0000 UTC m=+0.203778616 container attach ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 14:03:39 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3353689594; not ready for session (expect reconnect)
Jan 20 14:03:39 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:39 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:39 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083] boot
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:40 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 14:03:40 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: OSD bench result of 2808.890266 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 14:03:40 np0005589310 ceph-osd[86022]: osd.0 11 state: booting -> active
Jan 20 14:03:40 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:40 np0005589310 bash[87834]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:40 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:40 np0005589310 bash[87834]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:40 np0005589310 ceph-mgr[75417]: [devicehealth INFO root] creating mgr pool
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 20 14:03:40 np0005589310 lvm[87934]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:03:40 np0005589310 lvm[87934]: VG ceph_vg0 finished
Jan 20 14:03:40 np0005589310 lvm[87937]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:03:40 np0005589310 lvm[87937]: VG ceph_vg1 finished
Jan 20 14:03:40 np0005589310 lvm[87939]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:03:40 np0005589310 lvm[87939]: VG ceph_vg2 finished
Jan 20 14:03:40 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3353689594; not ready for session (expect reconnect)
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:40 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 14:03:40 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:41 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 20 14:03:41 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:41 np0005589310 bash[87834]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 20 14:03:41 np0005589310 bash[87834]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 14:03:41 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:41 np0005589310 bash[87834]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 20 14:03:41 np0005589310 ceph-osd[86022]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 20 14:03:41 np0005589310 ceph-osd[86022]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 20 14:03:41 np0005589310 ceph-osd[86022]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:41 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 14:03:41 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083] boot
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 20 14:03:41 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 20 14:03:41 np0005589310 bash[87834]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 20 14:03:41 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 20 14:03:41 np0005589310 bash[87834]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 20 14:03:41 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:41 np0005589310 bash[87834]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:41 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:41 np0005589310 bash[87834]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:41 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 20 14:03:41 np0005589310 bash[87834]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 20 14:03:41 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 20 14:03:41 np0005589310 bash[87834]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 20 14:03:41 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 20 14:03:41 np0005589310 bash[87834]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 20 14:03:41 np0005589310 systemd[1]: libpod-ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05.scope: Deactivated successfully.
Jan 20 14:03:41 np0005589310 systemd[1]: libpod-ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05.scope: Consumed 1.924s CPU time.
Jan 20 14:03:41 np0005589310 podman[87834]: 2026-01-20 19:03:41.302181083 +0000 UTC m=+1.541997362 container died ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:41 np0005589310 systemd[1]: var-lib-containers-storage-overlay-e012497fcc1bd994fe19065dcb808c43757b7004f29a95a2b719ce6d5a225bc5-merged.mount: Deactivated successfully.
Jan 20 14:03:41 np0005589310 podman[87834]: 2026-01-20 19:03:41.40824502 +0000 UTC m=+1.648061279 container remove ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 20 14:03:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 20 14:03:41 np0005589310 podman[88093]: 2026-01-20 19:03:41.701032184 +0000 UTC m=+0.069416304 container create d045a60defb83ca2430bb352b449b140006aab4f12b730bbce1d767b793cc797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 20 14:03:41 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77ce245ecc3f3cdf3c64497903bedb83ea375b5e67c339f51c2a280f0dced5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:41 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77ce245ecc3f3cdf3c64497903bedb83ea375b5e67c339f51c2a280f0dced5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:41 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77ce245ecc3f3cdf3c64497903bedb83ea375b5e67c339f51c2a280f0dced5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:41 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77ce245ecc3f3cdf3c64497903bedb83ea375b5e67c339f51c2a280f0dced5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:41 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77ce245ecc3f3cdf3c64497903bedb83ea375b5e67c339f51c2a280f0dced5b/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:41 np0005589310 podman[88093]: 2026-01-20 19:03:41.769097506 +0000 UTC m=+0.137481636 container init d045a60defb83ca2430bb352b449b140006aab4f12b730bbce1d767b793cc797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:41 np0005589310 podman[88093]: 2026-01-20 19:03:41.677109874 +0000 UTC m=+0.045494004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:41 np0005589310 podman[88093]: 2026-01-20 19:03:41.775449597 +0000 UTC m=+0.143833697 container start d045a60defb83ca2430bb352b449b140006aab4f12b730bbce1d767b793cc797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 14:03:41 np0005589310 bash[88093]: d045a60defb83ca2430bb352b449b140006aab4f12b730bbce1d767b793cc797
Jan 20 14:03:41 np0005589310 systemd[1]: Started Ceph osd.2 for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: pidfile_write: ignore empty --pid-file
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e400 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 14:03:41 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3353689594; not ready for session (expect reconnect)
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:41 np0005589310 ceph-osd[88112]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 20 14:03:41 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: load: jerasure load: lrc 
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 20 14:03:42 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 14:03:42 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 20 14:03:42 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Jan 20 14:03:42 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Jan 20 14:03:42 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:42 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:42 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:42 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:42 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 14:03:42 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 14:03:42 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 20 14:03:42 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 20 14:03:42 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:42 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:42 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs mount
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs mount shared_bdev_used = 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 20 14:03:42 np0005589310 python3[88212]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: RocksDB version: 7.9.2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Git sha 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: DB SUMMARY
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: DB Session ID:  56IM7OZ4MESAT1MG9R0Y
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: CURRENT file:  CURRENT
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                         Options.error_if_exists: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.create_if_missing: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                                     Options.env: 0x5564ebd0fea0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                                Options.info_log: 0x5564ecda08a0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                              Options.statistics: (nil)
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.use_fsync: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                              Options.db_log_dir: 
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.write_buffer_manager: 0x5564ebd74b40
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.unordered_write: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.row_cache: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                              Options.wal_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.two_write_queues: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.wal_compression: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.atomic_flush: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.max_background_jobs: 4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.max_background_compactions: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.max_subcompactions: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.max_open_files: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Compression algorithms supported:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kZSTD supported: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kXpressCompression supported: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kBZip2Compression supported: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kLZ4Compression supported: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kZlibCompression supported: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kSnappyCompression supported: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5564ebd138d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5564ebd138d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5564ebd138d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5564ebd138d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5564ebd138d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5564ebd138d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5564ebd138d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5564ebd13a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5564ebd13a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5564ebd13a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e81d777e-bb5f-4cd7-b7f1-0f55caa3acea
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935822247859, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935822250067, "job": 1, "event": "recovery_finished"}
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: freelist init
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: freelist _read_cfg
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs umount
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs mount
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluefs mount shared_bdev_used = 27262976
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: RocksDB version: 7.9.2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Git sha 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: DB SUMMARY
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: DB Session ID:  56IM7OZ4MESAT1MG9R0Z
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: CURRENT file:  CURRENT
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                         Options.error_if_exists: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.create_if_missing: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                                     Options.env: 0x5564ebd0fd50
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                                Options.info_log: 0x5564ecda1b00
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                              Options.statistics: (nil)
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.use_fsync: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                              Options.db_log_dir: 
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.write_buffer_manager: 0x5564ebd75900
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.unordered_write: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.row_cache: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                              Options.wal_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.two_write_queues: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.wal_compression: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.atomic_flush: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.max_background_jobs: 4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.max_background_compactions: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.max_subcompactions: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.max_open_files: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Compression algorithms supported:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kZSTD supported: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kXpressCompression supported: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kBZip2Compression supported: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kLZ4Compression supported: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kZlibCompression supported: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: 	kSnappyCompression supported: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5564ebd13a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5564ebd13a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5564ebd13a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5564ebd13a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5564ebd13a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5564ebd13a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5564ebd13a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5564ebd134b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5564ebd134b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 podman[88241]: 2026-01-20 19:03:42.324174098 +0000 UTC m=+0.071326481 container create 2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39 (image=quay.io/ceph/ceph:v20, name=gracious_mcnulty, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06300)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5564ebd134b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e81d777e-bb5f-4cd7-b7f1-0f55caa3acea
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935822317758, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935822333724, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935822, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e81d777e-bb5f-4cd7-b7f1-0f55caa3acea", "db_session_id": "56IM7OZ4MESAT1MG9R0Z", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935822349396, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935822, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e81d777e-bb5f-4cd7-b7f1-0f55caa3acea", "db_session_id": "56IM7OZ4MESAT1MG9R0Z", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935822361122, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935822, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e81d777e-bb5f-4cd7-b7f1-0f55caa3acea", "db_session_id": "56IM7OZ4MESAT1MG9R0Z", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935822370685, "job": 1, "event": "recovery_finished"}
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 20 14:03:42 np0005589310 systemd[1]: Started libpod-conmon-2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39.scope.
Jan 20 14:03:42 np0005589310 podman[88241]: 2026-01-20 19:03:42.299335927 +0000 UTC m=+0.046488340 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:42 np0005589310 ceph-osd[87071]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 23.126 iops: 5920.277 elapsed_sec: 0.507
Jan 20 14:03:42 np0005589310 ceph-osd[87071]: log_channel(cluster) log [WRN] : OSD bench result of 5920.276596 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5564ecda3c00
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: DB pointer 0x5564ecf5a000
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 20 14:03:42 np0005589310 ceph-osd[87071]: osd.1 0 waiting for initial osdmap
Jan 20 14:03:42 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1[87067]: 2026-01-20T19:03:42.414+0000 7f47b50b8640 -1 osd.1 0 waiting for initial osdmap
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 460.80 MB usag
Jan 20 14:03:42 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: _get_class not permitted to load lua
Jan 20 14:03:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6659d97a8da98fff18c34fb140f751fd41d8a9fbf5ef49b555008bfda0e05333/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6659d97a8da98fff18c34fb140f751fd41d8a9fbf5ef49b555008bfda0e05333/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6659d97a8da98fff18c34fb140f751fd41d8a9fbf5ef49b555008bfda0e05333/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:42 np0005589310 ceph-osd[87071]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 20 14:03:42 np0005589310 ceph-osd[87071]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 20 14:03:42 np0005589310 ceph-osd[87071]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 20 14:03:42 np0005589310 ceph-osd[87071]: osd.1 13 check_osdmap_features require_osd_release unknown -> tentacle
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: _get_class not permitted to load sdk
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: osd.2 0 load_pgs
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: osd.2 0 load_pgs opened 0 pgs
Jan 20 14:03:42 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2[88108]: 2026-01-20T19:03:42.435+0000 7f48d83058c0 -1 osd.2 0 log_to_monitors true
Jan 20 14:03:42 np0005589310 ceph-osd[88112]: osd.2 0 log_to_monitors true
Jan 20 14:03:42 np0005589310 podman[88241]: 2026-01-20 19:03:42.447964147 +0000 UTC m=+0.195116550 container init 2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39 (image=quay.io/ceph/ceph:v20, name=gracious_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 20 14:03:42 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 20 14:03:42 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 20 14:03:42 np0005589310 podman[88241]: 2026-01-20 19:03:42.457594746 +0000 UTC m=+0.204747129 container start 2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39 (image=quay.io/ceph/ceph:v20, name=gracious_mcnulty, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:42 np0005589310 podman[88241]: 2026-01-20 19:03:42.46533535 +0000 UTC m=+0.212487763 container attach 2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39 (image=quay.io/ceph/ceph:v20, name=gracious_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:42 np0005589310 ceph-osd[87071]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 14:03:42 np0005589310 ceph-osd[87071]: osd.1 13 set_numa_affinity not setting numa affinity
Jan 20 14:03:42 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1[87067]: 2026-01-20T19:03:42.465+0000 7f47afebd640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 14:03:42 np0005589310 ceph-osd[87071]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 20 14:03:42 np0005589310 podman[88639]: 2026-01-20 19:03:42.536562837 +0000 UTC m=+0.081992754 container create 10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_golick, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:03:42 np0005589310 systemd[1]: Started libpod-conmon-10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56.scope.
Jan 20 14:03:42 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:42 np0005589310 podman[88639]: 2026-01-20 19:03:42.517868982 +0000 UTC m=+0.063298909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:42 np0005589310 podman[88639]: 2026-01-20 19:03:42.611221465 +0000 UTC m=+0.156651422 container init 10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_golick, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:03:42 np0005589310 podman[88639]: 2026-01-20 19:03:42.616579033 +0000 UTC m=+0.162008960 container start 10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 14:03:42 np0005589310 reverent_golick[88686]: 167 167
Jan 20 14:03:42 np0005589310 podman[88639]: 2026-01-20 19:03:42.620168029 +0000 UTC m=+0.165597956 container attach 10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:42 np0005589310 systemd[1]: libpod-10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56.scope: Deactivated successfully.
Jan 20 14:03:42 np0005589310 podman[88639]: 2026-01-20 19:03:42.632663037 +0000 UTC m=+0.178092964 container died 10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_golick, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 20 14:03:42 np0005589310 systemd[1]: var-lib-containers-storage-overlay-689326ad401e1c110c205c7454883a526610d83f02abda542c1c5a5b1153d62d-merged.mount: Deactivated successfully.
Jan 20 14:03:42 np0005589310 podman[88639]: 2026-01-20 19:03:42.698815532 +0000 UTC m=+0.244245459 container remove 10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_golick, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 20 14:03:42 np0005589310 systemd[1]: libpod-conmon-10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56.scope: Deactivated successfully.
Jan 20 14:03:42 np0005589310 podman[88728]: 2026-01-20 19:03:42.921100967 +0000 UTC m=+0.073765028 container create a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:42 np0005589310 systemd[1]: Started libpod-conmon-a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15.scope.
Jan 20 14:03:42 np0005589310 podman[88728]: 2026-01-20 19:03:42.895546168 +0000 UTC m=+0.048210299 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:42 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8b4e161ae629dd346eadeb8e9a9000db94eb04192e85d0363d8a5c5e744c1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8b4e161ae629dd346eadeb8e9a9000db94eb04192e85d0363d8a5c5e744c1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8b4e161ae629dd346eadeb8e9a9000db94eb04192e85d0363d8a5c5e744c1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8b4e161ae629dd346eadeb8e9a9000db94eb04192e85d0363d8a5c5e744c1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:43 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3353689594; not ready for session (expect reconnect)
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:43 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 14:03:43 np0005589310 podman[88728]: 2026-01-20 19:03:43.018558878 +0000 UTC m=+0.171222959 container init a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 14:03:43 np0005589310 podman[88728]: 2026-01-20 19:03:43.025302349 +0000 UTC m=+0.177966380 container start a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_merkle, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:43 np0005589310 podman[88728]: 2026-01-20 19:03:43.028727471 +0000 UTC m=+0.181391532 container attach a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_merkle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3531939254' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 20 14:03:43 np0005589310 gracious_mcnulty[88629]: 
Jan 20 14:03:43 np0005589310 gracious_mcnulty[88629]: {"fsid":"90fff835-31df-513f-a409-b6642f04e6ac","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":95,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":13,"num_osds":3,"num_up_osds":1,"osd_up_since":1768935820,"num_in_osds":3,"osd_in_since":1768935800,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":447000576,"bytes_avail":21023641600,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-01-20T19:02:04:930609+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":1,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-20T19:03:35.512911+0000","services":{}},"progress_events":{}}
Jan 20 14:03:43 np0005589310 systemd[1]: libpod-2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39.scope: Deactivated successfully.
Jan 20 14:03:43 np0005589310 podman[88241]: 2026-01-20 19:03:43.105743616 +0000 UTC m=+0.852896019 container died 2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39 (image=quay.io/ceph/ceph:v20, name=gracious_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 20 14:03:43 np0005589310 systemd[1]: var-lib-containers-storage-overlay-6659d97a8da98fff18c34fb140f751fd41d8a9fbf5ef49b555008bfda0e05333-merged.mount: Deactivated successfully.
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 20 14:03:43 np0005589310 podman[88241]: 2026-01-20 19:03:43.157654982 +0000 UTC m=+0.904807365 container remove 2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39 (image=quay.io/ceph/ceph:v20, name=gracious_mcnulty, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: OSD bench result of 5920.276596 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 20 14:03:43 np0005589310 systemd[1]: libpod-conmon-2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39.scope: Deactivated successfully.
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Jan 20 14:03:43 np0005589310 ceph-osd[87071]: osd.1 14 state: booting -> active
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594] boot
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 14:03:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:43 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:43 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.meyjbf(active, since 72s)
Jan 20 14:03:43 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 20 14:03:43 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 20 14:03:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 20 14:03:43 np0005589310 python3[88816]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:43 np0005589310 podman[88851]: 2026-01-20 19:03:43.70709235 +0000 UTC m=+0.041024658 container create ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881 (image=quay.io/ceph/ceph:v20, name=gifted_shockley, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:43 np0005589310 systemd[1]: Started libpod-conmon-ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881.scope.
Jan 20 14:03:43 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:43 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90455b59853867601d2d3652918426d3844c115a6e0d830d3d7b19ed19d5fdf2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:43 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90455b59853867601d2d3652918426d3844c115a6e0d830d3d7b19ed19d5fdf2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:43 np0005589310 lvm[88881]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:03:43 np0005589310 lvm[88881]: VG ceph_vg0 finished
Jan 20 14:03:43 np0005589310 podman[88851]: 2026-01-20 19:03:43.688997708 +0000 UTC m=+0.022930036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:43 np0005589310 podman[88851]: 2026-01-20 19:03:43.78982714 +0000 UTC m=+0.123759458 container init ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881 (image=quay.io/ceph/ceph:v20, name=gifted_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:43 np0005589310 podman[88851]: 2026-01-20 19:03:43.796556291 +0000 UTC m=+0.130488599 container start ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881 (image=quay.io/ceph/ceph:v20, name=gifted_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 14:03:43 np0005589310 podman[88851]: 2026-01-20 19:03:43.800055994 +0000 UTC m=+0.133988302 container attach ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881 (image=quay.io/ceph/ceph:v20, name=gifted_shockley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 20 14:03:43 np0005589310 lvm[88883]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:03:43 np0005589310 lvm[88883]: VG ceph_vg1 finished
Jan 20 14:03:43 np0005589310 lvm[88885]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:03:43 np0005589310 lvm[88885]: VG ceph_vg2 finished
Jan 20 14:03:43 np0005589310 serene_merkle[88744]: {}
Jan 20 14:03:43 np0005589310 systemd[1]: libpod-a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15.scope: Deactivated successfully.
Jan 20 14:03:43 np0005589310 podman[88728]: 2026-01-20 19:03:43.946297498 +0000 UTC m=+1.098961519 container died a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_merkle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:43 np0005589310 systemd[1]: libpod-a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15.scope: Consumed 1.471s CPU time.
Jan 20 14:03:43 np0005589310 systemd[1]: var-lib-containers-storage-overlay-dc8b4e161ae629dd346eadeb8e9a9000db94eb04192e85d0363d8a5c5e744c1b-merged.mount: Deactivated successfully.
Jan 20 14:03:43 np0005589310 podman[88728]: 2026-01-20 19:03:43.997743483 +0000 UTC m=+1.150407504 container remove a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:44 np0005589310 systemd[1]: libpod-conmon-a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15.scope: Deactivated successfully.
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 20 14:03:44 np0005589310 ceph-osd[88112]: osd.2 0 done with init, starting boot process
Jan 20 14:03:44 np0005589310 ceph-osd[88112]: osd.2 0 start_boot
Jan 20 14:03:44 np0005589310 ceph-osd[88112]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 20 14:03:44 np0005589310 ceph-osd[88112]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 20 14:03:44 np0005589310 ceph-osd[88112]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 20 14:03:44 np0005589310 ceph-osd[88112]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 20 14:03:44 np0005589310 ceph-osd[88112]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 20 14:03:44 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:44 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:44 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1748615462; not ready for session (expect reconnect)
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:44 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594] boot
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 14:03:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1239761555' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 14:03:44 np0005589310 podman[89040]: 2026-01-20 19:03:44.826601477 +0000 UTC m=+0.118112405 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 14:03:44 np0005589310 podman[89040]: 2026-01-20 19:03:44.93379782 +0000 UTC m=+0.225308728 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 14:03:45 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 20 14:03:45 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1748615462; not ready for session (expect reconnect)
Jan 20 14:03:45 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:45 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:45 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1239761555' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 14:03:45 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:45 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 20 14:03:45 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 20 14:03:45 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:45 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:45 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:45 np0005589310 gifted_shockley[88877]: pool 'vms' created
Jan 20 14:03:45 np0005589310 ceph-mon[75120]: from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 14:03:45 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/1239761555' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 14:03:45 np0005589310 systemd[1]: libpod-ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881.scope: Deactivated successfully.
Jan 20 14:03:45 np0005589310 podman[88851]: 2026-01-20 19:03:45.236419749 +0000 UTC m=+1.570352067 container died ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881 (image=quay.io/ceph/ceph:v20, name=gifted_shockley, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:03:45 np0005589310 systemd[1]: var-lib-containers-storage-overlay-90455b59853867601d2d3652918426d3844c115a6e0d830d3d7b19ed19d5fdf2-merged.mount: Deactivated successfully.
Jan 20 14:03:45 np0005589310 podman[88851]: 2026-01-20 19:03:45.371491777 +0000 UTC m=+1.705424085 container remove ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881 (image=quay.io/ceph/ceph:v20, name=gifted_shockley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:45 np0005589310 systemd[1]: libpod-conmon-ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881.scope: Deactivated successfully.
Jan 20 14:03:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v44: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 20 14:03:45 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:03:45 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:45 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:03:45 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:45 np0005589310 python3[89226]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:45 np0005589310 podman[89276]: 2026-01-20 19:03:45.806711824 +0000 UTC m=+0.035700692 container create 7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd (image=quay.io/ceph/ceph:v20, name=determined_keller, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:03:45 np0005589310 systemd[1]: Started libpod-conmon-7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd.scope.
Jan 20 14:03:45 np0005589310 podman[89276]: 2026-01-20 19:03:45.791237625 +0000 UTC m=+0.020226523 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:45 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:45 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0009184716c148ae5c2ed4a1e345d0e9149e77eb5e1f77cfd0ca822b63c675/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:45 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0009184716c148ae5c2ed4a1e345d0e9149e77eb5e1f77cfd0ca822b63c675/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:45 np0005589310 podman[89276]: 2026-01-20 19:03:45.957751302 +0000 UTC m=+0.186740180 container init 7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd (image=quay.io/ceph/ceph:v20, name=determined_keller, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:03:46 np0005589310 podman[89276]: 2026-01-20 19:03:46.006662486 +0000 UTC m=+0.235651374 container start 7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd (image=quay.io/ceph/ceph:v20, name=determined_keller, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 20 14:03:46 np0005589310 podman[89276]: 2026-01-20 19:03:46.03955959 +0000 UTC m=+0.268548468 container attach 7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd (image=quay.io/ceph/ceph:v20, name=determined_keller, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:03:46 np0005589310 podman[89308]: 2026-01-20 19:03:46.185396694 +0000 UTC m=+0.097534855 container create 864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:46 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1748615462; not ready for session (expect reconnect)
Jan 20 14:03:46 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:46 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:46 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:46 np0005589310 podman[89308]: 2026-01-20 19:03:46.140472884 +0000 UTC m=+0.052611065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:46 np0005589310 systemd[1]: Started libpod-conmon-864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998.scope.
Jan 20 14:03:46 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/1239761555' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 14:03:46 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:46 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:46 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:46 np0005589310 podman[89308]: 2026-01-20 19:03:46.308114827 +0000 UTC m=+0.220252998 container init 864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:03:46 np0005589310 podman[89308]: 2026-01-20 19:03:46.320891672 +0000 UTC m=+0.233029833 container start 864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 14:03:46 np0005589310 hopeful_keldysh[89343]: 167 167
Jan 20 14:03:46 np0005589310 systemd[1]: libpod-864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998.scope: Deactivated successfully.
Jan 20 14:03:46 np0005589310 podman[89308]: 2026-01-20 19:03:46.340759915 +0000 UTC m=+0.252898106 container attach 864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 14:03:46 np0005589310 podman[89308]: 2026-01-20 19:03:46.341166294 +0000 UTC m=+0.253304455 container died 864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:46 np0005589310 systemd[1]: var-lib-containers-storage-overlay-b6d4e4d5baee9bc5f88385937da4a12787c13c16670557bf9cbfdce58a789316-merged.mount: Deactivated successfully.
Jan 20 14:03:46 np0005589310 podman[89308]: 2026-01-20 19:03:46.443481421 +0000 UTC m=+0.355619592 container remove 864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:03:46 np0005589310 systemd[1]: libpod-conmon-864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998.scope: Deactivated successfully.
Jan 20 14:03:46 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 14:03:46 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1587608720' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 14:03:46 np0005589310 ceph-mgr[75417]: [devicehealth INFO root] creating main.db for devicehealth
Jan 20 14:03:46 np0005589310 podman[89367]: 2026-01-20 19:03:46.634953222 +0000 UTC m=+0.081028601 container create 5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swirles, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:03:46 np0005589310 podman[89367]: 2026-01-20 19:03:46.602429678 +0000 UTC m=+0.048505057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:46 np0005589310 systemd[1]: Started libpod-conmon-5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2.scope.
Jan 20 14:03:46 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:46 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab4ccf98973c4774ae00f7291be8934a4c83831fe3051b05b4aa257431f902b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:46 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab4ccf98973c4774ae00f7291be8934a4c83831fe3051b05b4aa257431f902b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:46 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab4ccf98973c4774ae00f7291be8934a4c83831fe3051b05b4aa257431f902b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:46 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab4ccf98973c4774ae00f7291be8934a4c83831fe3051b05b4aa257431f902b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:46 np0005589310 podman[89367]: 2026-01-20 19:03:46.766925156 +0000 UTC m=+0.213000535 container init 5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swirles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:03:46 np0005589310 ceph-mgr[75417]: [devicehealth INFO root] Check health
Jan 20 14:03:46 np0005589310 podman[89367]: 2026-01-20 19:03:46.776269479 +0000 UTC m=+0.222344858 container start 5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swirles, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:46 np0005589310 ceph-mgr[75417]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Jan 20 14:03:46 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 20 14:03:46 np0005589310 podman[89367]: 2026-01-20 19:03:46.804157593 +0000 UTC m=+0.250232992 container attach 5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swirles, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:03:46 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 20 14:03:46 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 20 14:03:46 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 20 14:03:47 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1748615462; not ready for session (expect reconnect)
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:47 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/1587608720' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 20 14:03:47 np0005589310 zen_swirles[89387]: [
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:    {
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:        "available": false,
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:        "being_replaced": false,
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:        "ceph_device_lvm": false,
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:        "lsm_data": {},
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:        "lvs": [],
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:        "path": "/dev/sr0",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:        "rejected_reasons": [
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "Has a FileSystem",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "Insufficient space (<5GB)"
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:        ],
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:        "sys_api": {
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "actuators": null,
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "device_nodes": [
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:                "sr0"
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            ],
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "devname": "sr0",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "human_readable_size": "482.00 KB",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "id_bus": "ata",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "model": "QEMU DVD-ROM",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "nr_requests": "2",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "parent": "/dev/sr0",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "partitions": {},
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "path": "/dev/sr0",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "removable": "1",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "rev": "2.5+",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "ro": "0",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "rotational": "1",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "sas_address": "",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "sas_device_handle": "",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "scheduler_mode": "mq-deadline",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "sectors": 0,
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "sectorsize": "2048",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "size": 493568.0,
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "support_discard": "2048",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "type": "disk",
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:            "vendor": "QEMU"
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:        }
Jan 20 14:03:47 np0005589310 zen_swirles[89387]:    }
Jan 20 14:03:47 np0005589310 zen_swirles[89387]: ]
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1587608720' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 14:03:47 np0005589310 systemd[1]: libpod-5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2.scope: Deactivated successfully.
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Jan 20 14:03:47 np0005589310 podman[89367]: 2026-01-20 19:03:47.363735863 +0000 UTC m=+0.809811272 container died 5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swirles, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 14:03:47 np0005589310 determined_keller[89291]: pool 'volumes' created
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.meyjbf(active, since 76s)
Jan 20 14:03:47 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 17 pg[3.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:47 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:47 np0005589310 systemd[1]: libpod-7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd.scope: Deactivated successfully.
Jan 20 14:03:47 np0005589310 podman[89276]: 2026-01-20 19:03:47.393340148 +0000 UTC m=+1.622329046 container died 7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd (image=quay.io/ceph/ceph:v20, name=determined_keller, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:03:47 np0005589310 systemd[1]: var-lib-containers-storage-overlay-4ab4ccf98973c4774ae00f7291be8934a4c83831fe3051b05b4aa257431f902b-merged.mount: Deactivated successfully.
Jan 20 14:03:47 np0005589310 podman[89367]: 2026-01-20 19:03:47.457898795 +0000 UTC m=+0.903974174 container remove 5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:03:47 np0005589310 systemd[1]: libpod-conmon-5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2.scope: Deactivated successfully.
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:03:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v46: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:47 np0005589310 systemd[1]: var-lib-containers-storage-overlay-3b0009184716c148ae5c2ed4a1e345d0e9149e77eb5e1f77cfd0ca822b63c675-merged.mount: Deactivated successfully.
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 20 14:03:47 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43688k
Jan 20 14:03:47 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43688k
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 20 14:03:47 np0005589310 ceph-mgr[75417]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Jan 20 14:03:47 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:03:47 np0005589310 podman[89276]: 2026-01-20 19:03:47.606182308 +0000 UTC m=+1.835171186 container remove 7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd (image=quay.io/ceph/ceph:v20, name=determined_keller, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:03:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:03:47 np0005589310 systemd[1]: libpod-conmon-7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd.scope: Deactivated successfully.
Jan 20 14:03:47 np0005589310 python3[90161]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:47 np0005589310 podman[90163]: 2026-01-20 19:03:47.974486771 +0000 UTC m=+0.067448148 container create 1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc (image=quay.io/ceph/ceph:v20, name=recursing_banach, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:48 np0005589310 systemd[1]: Started libpod-conmon-1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc.scope.
Jan 20 14:03:48 np0005589310 podman[90163]: 2026-01-20 19:03:47.957430535 +0000 UTC m=+0.050391942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:48 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:48 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41669442d668b330d4fbc8924bc8305ea8441bbe3994169f58273133c59939f2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:48 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41669442d668b330d4fbc8924bc8305ea8441bbe3994169f58273133c59939f2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:48 np0005589310 podman[90163]: 2026-01-20 19:03:48.077451753 +0000 UTC m=+0.170413140 container init 1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc (image=quay.io/ceph/ceph:v20, name=recursing_banach, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 20 14:03:48 np0005589310 podman[90163]: 2026-01-20 19:03:48.090146115 +0000 UTC m=+0.183107502 container start 1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc (image=quay.io/ceph/ceph:v20, name=recursing_banach, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 14:03:48 np0005589310 podman[90163]: 2026-01-20 19:03:48.094084179 +0000 UTC m=+0.187045596 container attach 1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc (image=quay.io/ceph/ceph:v20, name=recursing_banach, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:48 np0005589310 podman[90192]: 2026-01-20 19:03:48.128585392 +0000 UTC m=+0.080641963 container create 8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_roentgen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:48 np0005589310 systemd[1]: Started libpod-conmon-8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0.scope.
Jan 20 14:03:48 np0005589310 podman[90192]: 2026-01-20 19:03:48.098208028 +0000 UTC m=+0.050264709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:48 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1748615462; not ready for session (expect reconnect)
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:48 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:48 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:48 np0005589310 podman[90192]: 2026-01-20 19:03:48.208580687 +0000 UTC m=+0.160637258 container init 8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 20 14:03:48 np0005589310 podman[90192]: 2026-01-20 19:03:48.215188025 +0000 UTC m=+0.167244596 container start 8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 20 14:03:48 np0005589310 quizzical_roentgen[90210]: 167 167
Jan 20 14:03:48 np0005589310 systemd[1]: libpod-8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0.scope: Deactivated successfully.
Jan 20 14:03:48 np0005589310 podman[90192]: 2026-01-20 19:03:48.22633653 +0000 UTC m=+0.178393101 container attach 8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:03:48 np0005589310 podman[90192]: 2026-01-20 19:03:48.22674888 +0000 UTC m=+0.178805471 container died 8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_roentgen, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:03:48 np0005589310 systemd[1]: var-lib-containers-storage-overlay-2a0c478b5f542f5b68cb4021cbc11c7f9f630c8afa58c8ab7adc98960103d537-merged.mount: Deactivated successfully.
Jan 20 14:03:48 np0005589310 podman[90192]: 2026-01-20 19:03:48.281715339 +0000 UTC m=+0.233771910 container remove 8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:48 np0005589310 systemd[1]: libpod-conmon-8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0.scope: Deactivated successfully.
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:48 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/1587608720' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: Adjusting osd_memory_target on compute-0 to 43688k
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:03:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:48 np0005589310 podman[90250]: 2026-01-20 19:03:48.499302182 +0000 UTC m=+0.085677761 container create 66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ishizaka, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 14:03:48 np0005589310 podman[90250]: 2026-01-20 19:03:48.45426319 +0000 UTC m=+0.040638839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:48 np0005589310 systemd[1]: Started libpod-conmon-66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a.scope.
Jan 20 14:03:48 np0005589310 ceph-osd[88112]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 21.849 iops: 5593.326 elapsed_sec: 0.536
Jan 20 14:03:48 np0005589310 ceph-osd[88112]: log_channel(cluster) log [WRN] : OSD bench result of 5593.325970 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 14:03:48 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:48 np0005589310 ceph-osd[88112]: osd.2 0 waiting for initial osdmap
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 14:03:48 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2420707572' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 14:03:48 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2[88108]: 2026-01-20T19:03:48.601+0000 7f48d4287640 -1 osd.2 0 waiting for initial osdmap
Jan 20 14:03:48 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410bb633f7f83eb942a23891d8b36d0f1a98d4f99b9c47013a5911397c5ed422/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:48 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410bb633f7f83eb942a23891d8b36d0f1a98d4f99b9c47013a5911397c5ed422/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:48 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410bb633f7f83eb942a23891d8b36d0f1a98d4f99b9c47013a5911397c5ed422/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:48 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410bb633f7f83eb942a23891d8b36d0f1a98d4f99b9c47013a5911397c5ed422/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:48 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410bb633f7f83eb942a23891d8b36d0f1a98d4f99b9c47013a5911397c5ed422/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:48 np0005589310 ceph-osd[88112]: osd.2 18 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 20 14:03:48 np0005589310 ceph-osd[88112]: osd.2 18 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 20 14:03:48 np0005589310 ceph-osd[88112]: osd.2 18 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 20 14:03:48 np0005589310 ceph-osd[88112]: osd.2 18 check_osdmap_features require_osd_release unknown -> tentacle
Jan 20 14:03:48 np0005589310 podman[90250]: 2026-01-20 19:03:48.6339798 +0000 UTC m=+0.220355409 container init 66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:48 np0005589310 podman[90250]: 2026-01-20 19:03:48.646516239 +0000 UTC m=+0.232891818 container start 66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:48 np0005589310 podman[90250]: 2026-01-20 19:03:48.650453093 +0000 UTC m=+0.236828682 container attach 66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ishizaka, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 20 14:03:48 np0005589310 ceph-osd[88112]: osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 14:03:48 np0005589310 ceph-osd[88112]: osd.2 18 set_numa_affinity not setting numa affinity
Jan 20 14:03:48 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2[88108]: 2026-01-20T19:03:48.650+0000 7f48cf08c640 -1 osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 14:03:48 np0005589310 ceph-osd[88112]: osd.2 18 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 20 14:03:49 np0005589310 pedantic_ishizaka[90267]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:03:49 np0005589310 pedantic_ishizaka[90267]: --> All data devices are unavailable
Jan 20 14:03:49 np0005589310 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1748615462; not ready for session (expect reconnect)
Jan 20 14:03:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:49 np0005589310 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 14:03:49 np0005589310 systemd[1]: libpod-66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a.scope: Deactivated successfully.
Jan 20 14:03:49 np0005589310 podman[90250]: 2026-01-20 19:03:49.216399674 +0000 UTC m=+0.802775263 container died 66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:49 np0005589310 systemd[1]: var-lib-containers-storage-overlay-410bb633f7f83eb942a23891d8b36d0f1a98d4f99b9c47013a5911397c5ed422-merged.mount: Deactivated successfully.
Jan 20 14:03:49 np0005589310 podman[90250]: 2026-01-20 19:03:49.291199455 +0000 UTC m=+0.877575034 container remove 66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ishizaka, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:03:49 np0005589310 systemd[1]: libpod-conmon-66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a.scope: Deactivated successfully.
Jan 20 14:03:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 20 14:03:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2420707572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 14:03:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Jan 20 14:03:49 np0005589310 recursing_banach[90190]: pool 'backups' created
Jan 20 14:03:49 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462] boot
Jan 20 14:03:49 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Jan 20 14:03:49 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:03:49 np0005589310 ceph-osd[88112]: osd.2 19 state: booting -> active
Jan 20 14:03:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 14:03:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 14:03:49 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 pi=[16,19)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:03:49 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/2420707572' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 14:03:49 np0005589310 systemd[1]: libpod-1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc.scope: Deactivated successfully.
Jan 20 14:03:49 np0005589310 podman[90163]: 2026-01-20 19:03:49.440580864 +0000 UTC m=+1.533542241 container died 1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc (image=quay.io/ceph/ceph:v20, name=recursing_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:03:49 np0005589310 systemd[1]: var-lib-containers-storage-overlay-41669442d668b330d4fbc8924bc8305ea8441bbe3994169f58273133c59939f2-merged.mount: Deactivated successfully.
Jan 20 14:03:49 np0005589310 podman[90163]: 2026-01-20 19:03:49.485049204 +0000 UTC m=+1.578010591 container remove 1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc (image=quay.io/ceph/ceph:v20, name=recursing_banach, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:49 np0005589310 systemd[1]: libpod-conmon-1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc.scope: Deactivated successfully.
Jan 20 14:03:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v49: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 20 14:03:49 np0005589310 podman[90402]: 2026-01-20 19:03:49.807489134 +0000 UTC m=+0.059910528 container create a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:49 np0005589310 python3[90389]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:49 np0005589310 systemd[1]: Started libpod-conmon-a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40.scope.
Jan 20 14:03:49 np0005589310 podman[90402]: 2026-01-20 19:03:49.775955413 +0000 UTC m=+0.028376907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:49 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:49 np0005589310 podman[90402]: 2026-01-20 19:03:49.881492707 +0000 UTC m=+0.133914121 container init a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_dijkstra, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 14:03:49 np0005589310 podman[90402]: 2026-01-20 19:03:49.89001534 +0000 UTC m=+0.142436724 container start a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_dijkstra, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:03:49 np0005589310 podman[90402]: 2026-01-20 19:03:49.893685048 +0000 UTC m=+0.146106432 container attach a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_dijkstra, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:49 np0005589310 gifted_dijkstra[90425]: 167 167
Jan 20 14:03:49 np0005589310 systemd[1]: libpod-a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40.scope: Deactivated successfully.
Jan 20 14:03:49 np0005589310 conmon[90425]: conmon a4d88a2ae5164763967a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40.scope/container/memory.events
Jan 20 14:03:49 np0005589310 podman[90402]: 2026-01-20 19:03:49.896121595 +0000 UTC m=+0.148542979 container died a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_dijkstra, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:49 np0005589310 systemd[1]: var-lib-containers-storage-overlay-b65f4e05a1993f24000be98da48f480654b5d3397fad67454b1a61107763cbcb-merged.mount: Deactivated successfully.
Jan 20 14:03:49 np0005589310 podman[90416]: 2026-01-20 19:03:49.935944344 +0000 UTC m=+0.091023699 container create 8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc (image=quay.io/ceph/ceph:v20, name=determined_cori, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:49 np0005589310 podman[90402]: 2026-01-20 19:03:49.945831939 +0000 UTC m=+0.198253323 container remove a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_dijkstra, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:49 np0005589310 systemd[1]: libpod-conmon-a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40.scope: Deactivated successfully.
Jan 20 14:03:49 np0005589310 systemd[1]: Started libpod-conmon-8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc.scope.
Jan 20 14:03:49 np0005589310 podman[90416]: 2026-01-20 19:03:49.886962657 +0000 UTC m=+0.042042092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:50 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d52c71b5c3da83d91a8d76d0cece9d2180eef66f50b20d06a67add422857ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d52c71b5c3da83d91a8d76d0cece9d2180eef66f50b20d06a67add422857ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:50 np0005589310 podman[90416]: 2026-01-20 19:03:50.035983097 +0000 UTC m=+0.191062672 container init 8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc (image=quay.io/ceph/ceph:v20, name=determined_cori, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 14:03:50 np0005589310 podman[90416]: 2026-01-20 19:03:50.044500839 +0000 UTC m=+0.199580184 container start 8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc (image=quay.io/ceph/ceph:v20, name=determined_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:50 np0005589310 podman[90416]: 2026-01-20 19:03:50.048037764 +0000 UTC m=+0.203117149 container attach 8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc (image=quay.io/ceph/ceph:v20, name=determined_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:50 np0005589310 podman[90465]: 2026-01-20 19:03:50.092605166 +0000 UTC m=+0.041147352 container create b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_curran, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:50 np0005589310 systemd[1]: Started libpod-conmon-b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27.scope.
Jan 20 14:03:50 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9514beaf9f70dc386aba57fda56c0a59ecd8c465d39b04baf2c87f24c6499613/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9514beaf9f70dc386aba57fda56c0a59ecd8c465d39b04baf2c87f24c6499613/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9514beaf9f70dc386aba57fda56c0a59ecd8c465d39b04baf2c87f24c6499613/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9514beaf9f70dc386aba57fda56c0a59ecd8c465d39b04baf2c87f24c6499613/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:50 np0005589310 podman[90465]: 2026-01-20 19:03:50.165725697 +0000 UTC m=+0.114267893 container init b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_curran, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 14:03:50 np0005589310 podman[90465]: 2026-01-20 19:03:50.073864949 +0000 UTC m=+0.022407135 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:50 np0005589310 podman[90465]: 2026-01-20 19:03:50.175331636 +0000 UTC m=+0.123873812 container start b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_curran, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:03:50 np0005589310 podman[90465]: 2026-01-20 19:03:50.179293381 +0000 UTC m=+0.127835587 container attach b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_curran, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:50 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 20 14:03:50 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 20 14:03:50 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 20 14:03:50 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:03:50 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 pi=[16,19)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:03:50 np0005589310 ceph-mon[75120]: OSD bench result of 5593.325970 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 14:03:50 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/2420707572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 14:03:50 np0005589310 ceph-mon[75120]: osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462] boot
Jan 20 14:03:50 np0005589310 keen_curran[90482]: {
Jan 20 14:03:50 np0005589310 keen_curran[90482]:    "0": [
Jan 20 14:03:50 np0005589310 keen_curran[90482]:        {
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "devices": [
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "/dev/loop3"
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            ],
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "lv_name": "ceph_lv0",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "lv_size": "21470642176",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "name": "ceph_lv0",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "tags": {
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.cluster_name": "ceph",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.crush_device_class": "",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.encrypted": "0",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.objectstore": "bluestore",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.osd_id": "0",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.type": "block",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.vdo": "0",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.with_tpm": "0"
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            },
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "type": "block",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "vg_name": "ceph_vg0"
Jan 20 14:03:50 np0005589310 keen_curran[90482]:        }
Jan 20 14:03:50 np0005589310 keen_curran[90482]:    ],
Jan 20 14:03:50 np0005589310 keen_curran[90482]:    "1": [
Jan 20 14:03:50 np0005589310 keen_curran[90482]:        {
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "devices": [
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "/dev/loop4"
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            ],
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "lv_name": "ceph_lv1",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "lv_size": "21470642176",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "name": "ceph_lv1",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "tags": {
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.cluster_name": "ceph",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.crush_device_class": "",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.encrypted": "0",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.objectstore": "bluestore",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.osd_id": "1",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.type": "block",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.vdo": "0",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.with_tpm": "0"
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            },
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "type": "block",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "vg_name": "ceph_vg1"
Jan 20 14:03:50 np0005589310 keen_curran[90482]:        }
Jan 20 14:03:50 np0005589310 keen_curran[90482]:    ],
Jan 20 14:03:50 np0005589310 keen_curran[90482]:    "2": [
Jan 20 14:03:50 np0005589310 keen_curran[90482]:        {
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "devices": [
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "/dev/loop5"
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            ],
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "lv_name": "ceph_lv2",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "lv_size": "21470642176",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "name": "ceph_lv2",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "tags": {
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.cluster_name": "ceph",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.crush_device_class": "",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.encrypted": "0",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.objectstore": "bluestore",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.osd_id": "2",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.type": "block",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.vdo": "0",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:                "ceph.with_tpm": "0"
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            },
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "type": "block",
Jan 20 14:03:50 np0005589310 keen_curran[90482]:            "vg_name": "ceph_vg2"
Jan 20 14:03:50 np0005589310 keen_curran[90482]:        }
Jan 20 14:03:50 np0005589310 keen_curran[90482]:    ]
Jan 20 14:03:50 np0005589310 keen_curran[90482]: }
Jan 20 14:03:50 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 14:03:50 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/324467649' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 14:03:50 np0005589310 systemd[1]: libpod-b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27.scope: Deactivated successfully.
Jan 20 14:03:50 np0005589310 podman[90465]: 2026-01-20 19:03:50.496773603 +0000 UTC m=+0.445315779 container died b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_curran, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:03:50 np0005589310 systemd[1]: var-lib-containers-storage-overlay-9514beaf9f70dc386aba57fda56c0a59ecd8c465d39b04baf2c87f24c6499613-merged.mount: Deactivated successfully.
Jan 20 14:03:50 np0005589310 podman[90465]: 2026-01-20 19:03:50.543345323 +0000 UTC m=+0.491887499 container remove b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_curran, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:50 np0005589310 systemd[1]: libpod-conmon-b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27.scope: Deactivated successfully.
Jan 20 14:03:50 np0005589310 podman[90587]: 2026-01-20 19:03:50.960824527 +0000 UTC m=+0.037402002 container create 9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 14:03:50 np0005589310 systemd[1]: Started libpod-conmon-9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da.scope.
Jan 20 14:03:51 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:51 np0005589310 podman[90587]: 2026-01-20 19:03:51.030467216 +0000 UTC m=+0.107044711 container init 9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:03:51 np0005589310 podman[90587]: 2026-01-20 19:03:51.036350436 +0000 UTC m=+0.112927911 container start 9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hellman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:51 np0005589310 podman[90587]: 2026-01-20 19:03:51.039983312 +0000 UTC m=+0.116560787 container attach 9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:03:51 np0005589310 podman[90587]: 2026-01-20 19:03:50.943590476 +0000 UTC m=+0.020167971 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:51 np0005589310 festive_hellman[90603]: 167 167
Jan 20 14:03:51 np0005589310 systemd[1]: libpod-9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da.scope: Deactivated successfully.
Jan 20 14:03:51 np0005589310 podman[90587]: 2026-01-20 19:03:51.042082903 +0000 UTC m=+0.118660398 container died 9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 20 14:03:51 np0005589310 systemd[1]: var-lib-containers-storage-overlay-59d291de53a878af06775c535a2c32f9b940f522d27670283ea8aacb8b08a96f-merged.mount: Deactivated successfully.
Jan 20 14:03:51 np0005589310 podman[90587]: 2026-01-20 19:03:51.078766836 +0000 UTC m=+0.155344311 container remove 9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hellman, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:51 np0005589310 systemd[1]: libpod-conmon-9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da.scope: Deactivated successfully.
Jan 20 14:03:51 np0005589310 podman[90628]: 2026-01-20 19:03:51.232297484 +0000 UTC m=+0.044774908 container create 7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dirac, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:03:51 np0005589310 systemd[1]: Started libpod-conmon-7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c.scope.
Jan 20 14:03:51 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:51 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/316cefa8c100b0c8be9ecc29854992fc3ed9c2e574fe82007bcf3b719deb7823/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:51 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/316cefa8c100b0c8be9ecc29854992fc3ed9c2e574fe82007bcf3b719deb7823/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:51 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/316cefa8c100b0c8be9ecc29854992fc3ed9c2e574fe82007bcf3b719deb7823/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:51 np0005589310 podman[90628]: 2026-01-20 19:03:51.216077847 +0000 UTC m=+0.028555301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:03:51 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/316cefa8c100b0c8be9ecc29854992fc3ed9c2e574fe82007bcf3b719deb7823/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:51 np0005589310 podman[90628]: 2026-01-20 19:03:51.322153544 +0000 UTC m=+0.134630988 container init 7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dirac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 14:03:51 np0005589310 podman[90628]: 2026-01-20 19:03:51.328510705 +0000 UTC m=+0.140988129 container start 7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:51 np0005589310 podman[90628]: 2026-01-20 19:03:51.332177083 +0000 UTC m=+0.144654507 container attach 7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 20 14:03:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/324467649' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 14:03:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 20 14:03:51 np0005589310 determined_cori[90456]: pool 'images' created
Jan 20 14:03:51 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 20 14:03:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [2] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:03:51 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/324467649' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 14:03:51 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/324467649' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 14:03:51 np0005589310 podman[90416]: 2026-01-20 19:03:51.437693986 +0000 UTC m=+1.592773341 container died 8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc (image=quay.io/ceph/ceph:v20, name=determined_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 14:03:51 np0005589310 systemd[1]: libpod-8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc.scope: Deactivated successfully.
Jan 20 14:03:51 np0005589310 systemd[1]: var-lib-containers-storage-overlay-47d52c71b5c3da83d91a8d76d0cece9d2180eef66f50b20d06a67add422857ed-merged.mount: Deactivated successfully.
Jan 20 14:03:51 np0005589310 podman[90416]: 2026-01-20 19:03:51.47939808 +0000 UTC m=+1.634477445 container remove 8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc (image=quay.io/ceph/ceph:v20, name=determined_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 20 14:03:51 np0005589310 systemd[1]: libpod-conmon-8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc.scope: Deactivated successfully.
Jan 20 14:03:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v52: 5 pgs: 1 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:03:51 np0005589310 python3[90698]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:51 np0005589310 podman[90726]: 2026-01-20 19:03:51.853785168 +0000 UTC m=+0.049953051 container create 2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365 (image=quay.io/ceph/ceph:v20, name=awesome_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:51 np0005589310 systemd[1]: Started libpod-conmon-2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365.scope.
Jan 20 14:03:51 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:51 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c71e51d6e16ffbb74d61b8f3720d0508df6b30cabaf2226452f918153006c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:51 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c71e51d6e16ffbb74d61b8f3720d0508df6b30cabaf2226452f918153006c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:51 np0005589310 podman[90726]: 2026-01-20 19:03:51.833809181 +0000 UTC m=+0.029977084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:51 np0005589310 podman[90726]: 2026-01-20 19:03:51.933575529 +0000 UTC m=+0.129743442 container init 2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365 (image=quay.io/ceph/ceph:v20, name=awesome_cannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:03:51 np0005589310 podman[90726]: 2026-01-20 19:03:51.93997136 +0000 UTC m=+0.136139243 container start 2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365 (image=quay.io/ceph/ceph:v20, name=awesome_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:51 np0005589310 podman[90726]: 2026-01-20 19:03:51.943899294 +0000 UTC m=+0.140067177 container attach 2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365 (image=quay.io/ceph/ceph:v20, name=awesome_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 20 14:03:52 np0005589310 lvm[90778]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:03:52 np0005589310 lvm[90775]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:03:52 np0005589310 lvm[90778]: VG ceph_vg1 finished
Jan 20 14:03:52 np0005589310 lvm[90775]: VG ceph_vg0 finished
Jan 20 14:03:52 np0005589310 lvm[90782]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:03:52 np0005589310 lvm[90782]: VG ceph_vg2 finished
Jan 20 14:03:52 np0005589310 distracted_dirac[90644]: {}
Jan 20 14:03:52 np0005589310 systemd[1]: libpod-7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c.scope: Deactivated successfully.
Jan 20 14:03:52 np0005589310 systemd[1]: libpod-7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c.scope: Consumed 1.368s CPU time.
Jan 20 14:03:52 np0005589310 conmon[90644]: conmon 7968c5f89e5635d6ca58 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c.scope/container/memory.events
Jan 20 14:03:52 np0005589310 podman[90628]: 2026-01-20 19:03:52.187528618 +0000 UTC m=+1.000006042 container died 7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dirac, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:03:52 np0005589310 systemd[1]: var-lib-containers-storage-overlay-316cefa8c100b0c8be9ecc29854992fc3ed9c2e574fe82007bcf3b719deb7823-merged.mount: Deactivated successfully.
Jan 20 14:03:52 np0005589310 podman[90628]: 2026-01-20 19:03:52.240490249 +0000 UTC m=+1.052967673 container remove 7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 20 14:03:52 np0005589310 systemd[1]: libpod-conmon-7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c.scope: Deactivated successfully.
Jan 20 14:03:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:03:52 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:03:52 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 14:03:52 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1314466985' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 14:03:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 20 14:03:52 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1314466985' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 14:03:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 20 14:03:52 np0005589310 awesome_cannon[90762]: pool 'cephfs.cephfs.meta' created
Jan 20 14:03:52 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 20 14:03:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 22 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [2] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:03:52 np0005589310 systemd[1]: libpod-2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365.scope: Deactivated successfully.
Jan 20 14:03:52 np0005589310 podman[90726]: 2026-01-20 19:03:52.446100617 +0000 UTC m=+0.642268500 container died 2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365 (image=quay.io/ceph/ceph:v20, name=awesome_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:03:52 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/1314466985' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 14:03:52 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/1314466985' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 14:03:52 np0005589310 systemd[1]: var-lib-containers-storage-overlay-51c71e51d6e16ffbb74d61b8f3720d0508df6b30cabaf2226452f918153006c2-merged.mount: Deactivated successfully.
Jan 20 14:03:52 np0005589310 podman[90726]: 2026-01-20 19:03:52.482813012 +0000 UTC m=+0.678980895 container remove 2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365 (image=quay.io/ceph/ceph:v20, name=awesome_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 20 14:03:52 np0005589310 systemd[1]: libpod-conmon-2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365.scope: Deactivated successfully.
Jan 20 14:03:52 np0005589310 python3[90881]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:03:52 np0005589310 podman[90882]: 2026-01-20 19:03:52.853168153 +0000 UTC m=+0.057508341 container create 74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c (image=quay.io/ceph/ceph:v20, name=loving_germain, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:52 np0005589310 systemd[1]: Started libpod-conmon-74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c.scope.
Jan 20 14:03:52 np0005589310 podman[90882]: 2026-01-20 19:03:52.823550228 +0000 UTC m=+0.027890506 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:52 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:52 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fe11634e6aeb11000cf5b7d296a56cce13c5c4a678725cf2a73841d8e7b20b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:52 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fe11634e6aeb11000cf5b7d296a56cce13c5c4a678725cf2a73841d8e7b20b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:52 np0005589310 podman[90882]: 2026-01-20 19:03:52.942793478 +0000 UTC m=+0.147133676 container init 74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c (image=quay.io/ceph/ceph:v20, name=loving_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:03:52 np0005589310 podman[90882]: 2026-01-20 19:03:52.949943528 +0000 UTC m=+0.154283726 container start 74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c (image=quay.io/ceph/ceph:v20, name=loving_germain, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:52 np0005589310 podman[90882]: 2026-01-20 19:03:52.953967284 +0000 UTC m=+0.158307492 container attach 74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c (image=quay.io/ceph/ceph:v20, name=loving_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:03:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 14:03:53 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3858958607' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 14:03:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 20 14:03:53 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3858958607' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 14:03:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 20 14:03:53 np0005589310 loving_germain[90897]: pool 'cephfs.cephfs.data' created
Jan 20 14:03:53 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 20 14:03:53 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:03:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:03:53 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3858958607' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 14:03:53 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3858958607' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 14:03:53 np0005589310 systemd[1]: libpod-74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c.scope: Deactivated successfully.
Jan 20 14:03:53 np0005589310 podman[90882]: 2026-01-20 19:03:53.458690817 +0000 UTC m=+0.663031045 container died 74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c (image=quay.io/ceph/ceph:v20, name=loving_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 14:03:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:53 np0005589310 systemd[1]: var-lib-containers-storage-overlay-07fe11634e6aeb11000cf5b7d296a56cce13c5c4a678725cf2a73841d8e7b20b-merged.mount: Deactivated successfully.
Jan 20 14:03:53 np0005589310 podman[90882]: 2026-01-20 19:03:53.499902869 +0000 UTC m=+0.704243057 container remove 74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c (image=quay.io/ceph/ceph:v20, name=loving_germain, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 20 14:03:53 np0005589310 systemd[1]: libpod-conmon-74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c.scope: Deactivated successfully.
Jan 20 14:03:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 3 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:03:53 np0005589310 python3[90960]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:53 np0005589310 podman[90961]: 2026-01-20 19:03:53.889346745 +0000 UTC m=+0.044611653 container create 3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b (image=quay.io/ceph/ceph:v20, name=hardcore_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:03:53 np0005589310 systemd[1]: Started libpod-conmon-3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b.scope.
Jan 20 14:03:53 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2415667277a05059cad56aeb6ed849a2f8acf38f6bd98b183472f5e6805cb8e7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2415667277a05059cad56aeb6ed849a2f8acf38f6bd98b183472f5e6805cb8e7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:53 np0005589310 podman[90961]: 2026-01-20 19:03:53.869794229 +0000 UTC m=+0.025059187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:53 np0005589310 podman[90961]: 2026-01-20 19:03:53.966589876 +0000 UTC m=+0.121854804 container init 3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b (image=quay.io/ceph/ceph:v20, name=hardcore_matsumoto, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 14:03:53 np0005589310 podman[90961]: 2026-01-20 19:03:53.971138664 +0000 UTC m=+0.126403572 container start 3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b (image=quay.io/ceph/ceph:v20, name=hardcore_matsumoto, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:03:53 np0005589310 podman[90961]: 2026-01-20 19:03:53.974100395 +0000 UTC m=+0.129365303 container attach 3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b (image=quay.io/ceph/ceph:v20, name=hardcore_matsumoto, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 14:03:54 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 20 14:03:54 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1189008687' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 20 14:03:54 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 20 14:03:54 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1189008687' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 20 14:03:54 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 20 14:03:54 np0005589310 hardcore_matsumoto[90976]: enabled application 'rbd' on pool 'vms'
Jan 20 14:03:54 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 20 14:03:54 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:03:54 np0005589310 systemd[1]: libpod-3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b.scope: Deactivated successfully.
Jan 20 14:03:54 np0005589310 podman[90961]: 2026-01-20 19:03:54.458267628 +0000 UTC m=+0.613532536 container died 3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b (image=quay.io/ceph/ceph:v20, name=hardcore_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:54 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/1189008687' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 20 14:03:54 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/1189008687' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 20 14:03:54 np0005589310 systemd[1]: var-lib-containers-storage-overlay-2415667277a05059cad56aeb6ed849a2f8acf38f6bd98b183472f5e6805cb8e7-merged.mount: Deactivated successfully.
Jan 20 14:03:54 np0005589310 podman[90961]: 2026-01-20 19:03:54.498868095 +0000 UTC m=+0.654133003 container remove 3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b (image=quay.io/ceph/ceph:v20, name=hardcore_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:54 np0005589310 systemd[1]: libpod-conmon-3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b.scope: Deactivated successfully.
Jan 20 14:03:54 np0005589310 python3[91037]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:54 np0005589310 podman[91038]: 2026-01-20 19:03:54.850022729 +0000 UTC m=+0.051975429 container create 66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905 (image=quay.io/ceph/ceph:v20, name=sad_liskov, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 20 14:03:54 np0005589310 systemd[1]: Started libpod-conmon-66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905.scope.
Jan 20 14:03:54 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:54 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce0681b8613906dae3768166f173509c0ba2b3818ba53a533ea46d29afcbdfe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:54 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce0681b8613906dae3768166f173509c0ba2b3818ba53a533ea46d29afcbdfe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:54 np0005589310 podman[91038]: 2026-01-20 19:03:54.920912248 +0000 UTC m=+0.122864968 container init 66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905 (image=quay.io/ceph/ceph:v20, name=sad_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:54 np0005589310 podman[91038]: 2026-01-20 19:03:54.827712348 +0000 UTC m=+0.029665098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:54 np0005589310 podman[91038]: 2026-01-20 19:03:54.927338161 +0000 UTC m=+0.129290901 container start 66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905 (image=quay.io/ceph/ceph:v20, name=sad_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 14:03:54 np0005589310 podman[91038]: 2026-01-20 19:03:54.931986821 +0000 UTC m=+0.133939551 container attach 66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905 (image=quay.io/ceph/ceph:v20, name=sad_liskov, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:03:55 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 20 14:03:55 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/789911532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 20 14:03:55 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 20 14:03:55 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/789911532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 20 14:03:55 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 20 14:03:55 np0005589310 sad_liskov[91054]: enabled application 'rbd' on pool 'volumes'
Jan 20 14:03:55 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 20 14:03:55 np0005589310 systemd[1]: libpod-66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905.scope: Deactivated successfully.
Jan 20 14:03:55 np0005589310 conmon[91054]: conmon 66af91f94602c1a678ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905.scope/container/memory.events
Jan 20 14:03:55 np0005589310 podman[91038]: 2026-01-20 19:03:55.466972235 +0000 UTC m=+0.668924985 container died 66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905 (image=quay.io/ceph/ceph:v20, name=sad_liskov, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:55 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/789911532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 20 14:03:55 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/789911532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 20 14:03:55 np0005589310 systemd[1]: var-lib-containers-storage-overlay-9ce0681b8613906dae3768166f173509c0ba2b3818ba53a533ea46d29afcbdfe-merged.mount: Deactivated successfully.
Jan 20 14:03:55 np0005589310 podman[91038]: 2026-01-20 19:03:55.511086046 +0000 UTC m=+0.713038746 container remove 66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905 (image=quay.io/ceph/ceph:v20, name=sad_liskov, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:03:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:03:55 np0005589310 systemd[1]: libpod-conmon-66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905.scope: Deactivated successfully.
Jan 20 14:03:55 np0005589310 python3[91115]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:55 np0005589310 podman[91116]: 2026-01-20 19:03:55.900510562 +0000 UTC m=+0.091649874 container create 92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3 (image=quay.io/ceph/ceph:v20, name=jovial_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:55 np0005589310 podman[91116]: 2026-01-20 19:03:55.834282754 +0000 UTC m=+0.025422106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:55 np0005589310 systemd[1]: Started libpod-conmon-92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3.scope.
Jan 20 14:03:56 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:56 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5b19ac70930a6307f1fe155b037839c63ff2649f7896583d1de7633ba41375/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:56 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5b19ac70930a6307f1fe155b037839c63ff2649f7896583d1de7633ba41375/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:56 np0005589310 podman[91116]: 2026-01-20 19:03:56.272942234 +0000 UTC m=+0.464081556 container init 92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3 (image=quay.io/ceph/ceph:v20, name=jovial_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 20 14:03:56 np0005589310 podman[91116]: 2026-01-20 19:03:56.279941931 +0000 UTC m=+0.471081233 container start 92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3 (image=quay.io/ceph/ceph:v20, name=jovial_hodgkin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 20 14:03:56 np0005589310 podman[91116]: 2026-01-20 19:03:56.283909556 +0000 UTC m=+0.475048878 container attach 92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3 (image=quay.io/ceph/ceph:v20, name=jovial_hodgkin, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:56 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 20 14:03:56 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/747428867' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 20 14:03:57 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 20 14:03:57 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/747428867' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 20 14:03:57 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/747428867' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 20 14:03:57 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 20 14:03:57 np0005589310 jovial_hodgkin[91131]: enabled application 'rbd' on pool 'backups'
Jan 20 14:03:57 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 20 14:03:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:03:57 np0005589310 systemd[1]: libpod-92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3.scope: Deactivated successfully.
Jan 20 14:03:57 np0005589310 podman[91116]: 2026-01-20 19:03:57.535780506 +0000 UTC m=+1.726919808 container died 92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3 (image=quay.io/ceph/ceph:v20, name=jovial_hodgkin, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:57 np0005589310 systemd[1]: var-lib-containers-storage-overlay-cb5b19ac70930a6307f1fe155b037839c63ff2649f7896583d1de7633ba41375-merged.mount: Deactivated successfully.
Jan 20 14:03:57 np0005589310 podman[91116]: 2026-01-20 19:03:57.579124919 +0000 UTC m=+1.770264221 container remove 92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3 (image=quay.io/ceph/ceph:v20, name=jovial_hodgkin, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:57 np0005589310 systemd[1]: libpod-conmon-92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3.scope: Deactivated successfully.
Jan 20 14:03:57 np0005589310 python3[91194]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:57 np0005589310 podman[91195]: 2026-01-20 19:03:57.965634996 +0000 UTC m=+0.047821510 container create 8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd (image=quay.io/ceph/ceph:v20, name=heuristic_hopper, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:03:57 np0005589310 systemd[1]: Started libpod-conmon-8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd.scope.
Jan 20 14:03:58 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:58 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ffb8a7bf8082b542b38d93d8602ae365915f8d23517453c64a1737c2df1016/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:58 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ffb8a7bf8082b542b38d93d8602ae365915f8d23517453c64a1737c2df1016/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:58 np0005589310 podman[91195]: 2026-01-20 19:03:58.028963144 +0000 UTC m=+0.111149668 container init 8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd (image=quay.io/ceph/ceph:v20, name=heuristic_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 14:03:58 np0005589310 podman[91195]: 2026-01-20 19:03:57.940913996 +0000 UTC m=+0.023100530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:58 np0005589310 podman[91195]: 2026-01-20 19:03:58.036616466 +0000 UTC m=+0.118802980 container start 8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd (image=quay.io/ceph/ceph:v20, name=heuristic_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:03:58 np0005589310 podman[91195]: 2026-01-20 19:03:58.039770002 +0000 UTC m=+0.121956546 container attach 8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd (image=quay.io/ceph/ceph:v20, name=heuristic_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:03:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 20 14:03:58 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4156469610' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 20 14:03:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:03:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 20 14:03:58 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/747428867' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 20 14:03:58 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/4156469610' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 20 14:03:58 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4156469610' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 20 14:03:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 20 14:03:58 np0005589310 heuristic_hopper[91211]: enabled application 'rbd' on pool 'images'
Jan 20 14:03:58 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 20 14:03:58 np0005589310 systemd[1]: libpod-8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd.scope: Deactivated successfully.
Jan 20 14:03:58 np0005589310 podman[91195]: 2026-01-20 19:03:58.536120294 +0000 UTC m=+0.618306848 container died 8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd (image=quay.io/ceph/ceph:v20, name=heuristic_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:03:58 np0005589310 systemd[1]: var-lib-containers-storage-overlay-74ffb8a7bf8082b542b38d93d8602ae365915f8d23517453c64a1737c2df1016-merged.mount: Deactivated successfully.
Jan 20 14:03:58 np0005589310 podman[91195]: 2026-01-20 19:03:58.585665555 +0000 UTC m=+0.667852079 container remove 8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd (image=quay.io/ceph/ceph:v20, name=heuristic_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:03:58 np0005589310 systemd[1]: libpod-conmon-8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd.scope: Deactivated successfully.
Jan 20 14:03:58 np0005589310 python3[91275]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:58 np0005589310 podman[91276]: 2026-01-20 19:03:58.913178386 +0000 UTC m=+0.050330639 container create 0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3 (image=quay.io/ceph/ceph:v20, name=ecstatic_chebyshev, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:03:58 np0005589310 systemd[1]: Started libpod-conmon-0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3.scope.
Jan 20 14:03:58 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:58 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d52e83c202da7da81ce9b353a7256a4acbf872ffc8196766ee8cab5dbb61b3f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:58 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d52e83c202da7da81ce9b353a7256a4acbf872ffc8196766ee8cab5dbb61b3f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:58 np0005589310 podman[91276]: 2026-01-20 19:03:58.887425853 +0000 UTC m=+0.024578126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:03:58 np0005589310 podman[91276]: 2026-01-20 19:03:58.986551054 +0000 UTC m=+0.123703327 container init 0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3 (image=quay.io/ceph/ceph:v20, name=ecstatic_chebyshev, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:58 np0005589310 podman[91276]: 2026-01-20 19:03:58.991493762 +0000 UTC m=+0.128646015 container start 0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3 (image=quay.io/ceph/ceph:v20, name=ecstatic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:03:58 np0005589310 podman[91276]: 2026-01-20 19:03:58.994653738 +0000 UTC m=+0.131805991 container attach 0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3 (image=quay.io/ceph/ceph:v20, name=ecstatic_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:59 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 20 14:03:59 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2777962107' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 20 14:03:59 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 20 14:03:59 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/4156469610' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 20 14:03:59 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/2777962107' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 20 14:03:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:03:59 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2777962107' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 20 14:03:59 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 20 14:03:59 np0005589310 ecstatic_chebyshev[91292]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 20 14:03:59 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 20 14:03:59 np0005589310 systemd[1]: libpod-0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3.scope: Deactivated successfully.
Jan 20 14:03:59 np0005589310 conmon[91292]: conmon 0aee58d34afa3691208d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3.scope/container/memory.events
Jan 20 14:03:59 np0005589310 podman[91276]: 2026-01-20 19:03:59.552078426 +0000 UTC m=+0.689230679 container died 0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3 (image=quay.io/ceph/ceph:v20, name=ecstatic_chebyshev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:59 np0005589310 systemd[1]: var-lib-containers-storage-overlay-9d52e83c202da7da81ce9b353a7256a4acbf872ffc8196766ee8cab5dbb61b3f-merged.mount: Deactivated successfully.
Jan 20 14:03:59 np0005589310 podman[91276]: 2026-01-20 19:03:59.598175614 +0000 UTC m=+0.735327867 container remove 0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3 (image=quay.io/ceph/ceph:v20, name=ecstatic_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:03:59 np0005589310 systemd[1]: libpod-conmon-0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3.scope: Deactivated successfully.
Jan 20 14:03:59 np0005589310 python3[91353]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:03:59 np0005589310 podman[91354]: 2026-01-20 19:03:59.930150051 +0000 UTC m=+0.039052571 container create 52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209 (image=quay.io/ceph/ceph:v20, name=quirky_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 20 14:03:59 np0005589310 systemd[1]: Started libpod-conmon-52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209.scope.
Jan 20 14:03:59 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:03:59 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7470744e9363f4280d001fba692992eda8ef033d59348a109fbcc813f1ffa1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:03:59 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7470744e9363f4280d001fba692992eda8ef033d59348a109fbcc813f1ffa1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:00 np0005589310 podman[91354]: 2026-01-20 19:04:00.003695793 +0000 UTC m=+0.112598333 container init 52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209 (image=quay.io/ceph/ceph:v20, name=quirky_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 20 14:04:00 np0005589310 podman[91354]: 2026-01-20 19:04:00.008883267 +0000 UTC m=+0.117785827 container start 52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209 (image=quay.io/ceph/ceph:v20, name=quirky_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 20 14:04:00 np0005589310 podman[91354]: 2026-01-20 19:03:59.913564546 +0000 UTC m=+0.022467096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:00 np0005589310 podman[91354]: 2026-01-20 19:04:00.013300482 +0000 UTC m=+0.122203022 container attach 52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209 (image=quay.io/ceph/ceph:v20, name=quirky_wing, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 20 14:04:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3772668256' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 20 14:04:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 20 14:04:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3772668256' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 20 14:04:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 20 14:04:00 np0005589310 quirky_wing[91369]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 20 14:04:00 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 20 14:04:00 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/2777962107' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 20 14:04:00 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3772668256' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 20 14:04:00 np0005589310 systemd[1]: libpod-52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209.scope: Deactivated successfully.
Jan 20 14:04:00 np0005589310 podman[91354]: 2026-01-20 19:04:00.57491687 +0000 UTC m=+0.683819390 container died 52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209 (image=quay.io/ceph/ceph:v20, name=quirky_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 14:04:00 np0005589310 systemd[1]: var-lib-containers-storage-overlay-ab7470744e9363f4280d001fba692992eda8ef033d59348a109fbcc813f1ffa1-merged.mount: Deactivated successfully.
Jan 20 14:04:00 np0005589310 podman[91354]: 2026-01-20 19:04:00.620555857 +0000 UTC m=+0.729458387 container remove 52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209 (image=quay.io/ceph/ceph:v20, name=quirky_wing, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 20 14:04:00 np0005589310 systemd[1]: libpod-conmon-52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209.scope: Deactivated successfully.
Jan 20 14:04:01 np0005589310 python3[91481]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 14:04:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:01 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3772668256' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 20 14:04:01 np0005589310 python3[91552]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935841.2559671-36589-95521937731709/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:02 np0005589310 python3[91654]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 14:04:02 np0005589310 python3[91729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935842.1561182-36603-218795273964965/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=6e4615d43abe95e636d62123fc987968919dda9e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:03 np0005589310 python3[91779]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:03 np0005589310 podman[91780]: 2026-01-20 19:04:03.30323791 +0000 UTC m=+0.053048915 container create b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 14:04:03 np0005589310 systemd[1]: Started libpod-conmon-b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac.scope.
Jan 20 14:04:03 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2192d83717af1e137250aeb2f6bc2b79e3c846c68a80f25b0ffa1d64e983e517/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2192d83717af1e137250aeb2f6bc2b79e3c846c68a80f25b0ffa1d64e983e517/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2192d83717af1e137250aeb2f6bc2b79e3c846c68a80f25b0ffa1d64e983e517/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:03 np0005589310 podman[91780]: 2026-01-20 19:04:03.283657574 +0000 UTC m=+0.033468579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:03 np0005589310 podman[91780]: 2026-01-20 19:04:03.38762825 +0000 UTC m=+0.137439265 container init b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 14:04:03 np0005589310 podman[91780]: 2026-01-20 19:04:03.394430592 +0000 UTC m=+0.144241577 container start b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 14:04:03 np0005589310 podman[91780]: 2026-01-20 19:04:03.398196642 +0000 UTC m=+0.148007627 container attach b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 20 14:04:03 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2750738983' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 20 14:04:03 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2750738983' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 14:04:03 np0005589310 modest_kapitsa[91795]: 
Jan 20 14:04:03 np0005589310 modest_kapitsa[91795]: [global]
Jan 20 14:04:03 np0005589310 modest_kapitsa[91795]: #011fsid = 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 14:04:03 np0005589310 modest_kapitsa[91795]: #011mon_host = 192.168.122.100
Jan 20 14:04:03 np0005589310 modest_kapitsa[91795]: #011rgw_keystone_api_version = 3
Jan 20 14:04:03 np0005589310 systemd[1]: libpod-b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac.scope: Deactivated successfully.
Jan 20 14:04:03 np0005589310 podman[91780]: 2026-01-20 19:04:03.842751941 +0000 UTC m=+0.592562926 container died b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:03 np0005589310 systemd[1]: var-lib-containers-storage-overlay-2192d83717af1e137250aeb2f6bc2b79e3c846c68a80f25b0ffa1d64e983e517-merged.mount: Deactivated successfully.
Jan 20 14:04:03 np0005589310 podman[91780]: 2026-01-20 19:04:03.880609873 +0000 UTC m=+0.630420858 container remove b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:03 np0005589310 systemd[1]: libpod-conmon-b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac.scope: Deactivated successfully.
Jan 20 14:04:04 np0005589310 python3[91907]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:04 np0005589310 podman[91931]: 2026-01-20 19:04:04.276959664 +0000 UTC m=+0.050149375 container create 995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30 (image=quay.io/ceph/ceph:v20, name=admiring_liskov, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:04:04 np0005589310 systemd[1]: Started libpod-conmon-995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30.scope.
Jan 20 14:04:04 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe6ef15523479e0f71ca53572883930ea680854fb8867b6f3ba412072cd2cbc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe6ef15523479e0f71ca53572883930ea680854fb8867b6f3ba412072cd2cbc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe6ef15523479e0f71ca53572883930ea680854fb8867b6f3ba412072cd2cbc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:04 np0005589310 podman[91931]: 2026-01-20 19:04:04.252152544 +0000 UTC m=+0.025342285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:04 np0005589310 podman[91931]: 2026-01-20 19:04:04.355979137 +0000 UTC m=+0.129168858 container init 995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30 (image=quay.io/ceph/ceph:v20, name=admiring_liskov, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 14:04:04 np0005589310 podman[91931]: 2026-01-20 19:04:04.362038881 +0000 UTC m=+0.135228582 container start 995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30 (image=quay.io/ceph/ceph:v20, name=admiring_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:04:04 np0005589310 podman[91931]: 2026-01-20 19:04:04.366568549 +0000 UTC m=+0.139758270 container attach 995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30 (image=quay.io/ceph/ceph:v20, name=admiring_liskov, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 14:04:04 np0005589310 podman[91962]: 2026-01-20 19:04:04.391539704 +0000 UTC m=+0.071812502 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:04:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:04:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:04:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:04:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:04:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:04:04 np0005589310 podman[91962]: 2026-01-20 19:04:04.513458398 +0000 UTC m=+0.193731226 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:04:04 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/2750738983' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 20 14:04:04 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/2750738983' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 14:04:04 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 20 14:04:04 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3688985674' entity='client.admin' 
Jan 20 14:04:04 np0005589310 admiring_liskov[91964]: set ssl_option
Jan 20 14:04:04 np0005589310 systemd[1]: libpod-995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30.scope: Deactivated successfully.
Jan 20 14:04:04 np0005589310 conmon[91964]: conmon 995faee01cb89e1bf260 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30.scope/container/memory.events
Jan 20 14:04:04 np0005589310 podman[91931]: 2026-01-20 19:04:04.94086434 +0000 UTC m=+0.714054051 container died 995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30 (image=quay.io/ceph/ceph:v20, name=admiring_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 14:04:04 np0005589310 systemd[1]: var-lib-containers-storage-overlay-3fe6ef15523479e0f71ca53572883930ea680854fb8867b6f3ba412072cd2cbc-merged.mount: Deactivated successfully.
Jan 20 14:04:04 np0005589310 podman[91931]: 2026-01-20 19:04:04.978249399 +0000 UTC m=+0.751439100 container remove 995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30 (image=quay.io/ceph/ceph:v20, name=admiring_liskov, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 20 14:04:04 np0005589310 systemd[1]: libpod-conmon-995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30.scope: Deactivated successfully.
Jan 20 14:04:05 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:04:05 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:05 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:04:05 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:05 np0005589310 python3[92147]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:05 np0005589310 podman[92173]: 2026-01-20 19:04:05.382881978 +0000 UTC m=+0.043546288 container create b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999 (image=quay.io/ceph/ceph:v20, name=youthful_faraday, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:05 np0005589310 systemd[1]: Started libpod-conmon-b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999.scope.
Jan 20 14:04:05 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:05 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15813e9c6b0b2aec89aae91e56a52815db237dedc77c467701261889dd69467b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:05 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15813e9c6b0b2aec89aae91e56a52815db237dedc77c467701261889dd69467b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:05 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15813e9c6b0b2aec89aae91e56a52815db237dedc77c467701261889dd69467b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:05 np0005589310 podman[92173]: 2026-01-20 19:04:05.365409132 +0000 UTC m=+0.026073462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:05 np0005589310 podman[92173]: 2026-01-20 19:04:05.470480795 +0000 UTC m=+0.131145115 container init b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999 (image=quay.io/ceph/ceph:v20, name=youthful_faraday, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:04:05 np0005589310 podman[92173]: 2026-01-20 19:04:05.478840694 +0000 UTC m=+0.139504994 container start b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999 (image=quay.io/ceph/ceph:v20, name=youthful_faraday, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:04:05 np0005589310 podman[92173]: 2026-01-20 19:04:05.482811529 +0000 UTC m=+0.143475839 container attach b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999 (image=quay.io/ceph/ceph:v20, name=youthful_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:04:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:05 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3688985674' entity='client.admin' 
Jan 20 14:04:05 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:05 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:05 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:04:05 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Jan 20 14:04:05 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 20 14:04:05 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 20 14:04:05 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:05 np0005589310 youthful_faraday[92222]: Scheduled rgw.rgw update...
Jan 20 14:04:05 np0005589310 systemd[1]: libpod-b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999.scope: Deactivated successfully.
Jan 20 14:04:05 np0005589310 podman[92173]: 2026-01-20 19:04:05.95178565 +0000 UTC m=+0.612449960 container died b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999 (image=quay.io/ceph/ceph:v20, name=youthful_faraday, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:05 np0005589310 systemd[1]: var-lib-containers-storage-overlay-15813e9c6b0b2aec89aae91e56a52815db237dedc77c467701261889dd69467b-merged.mount: Deactivated successfully.
Jan 20 14:04:06 np0005589310 podman[92173]: 2026-01-20 19:04:06.002754304 +0000 UTC m=+0.663418634 container remove b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999 (image=quay.io/ceph/ceph:v20, name=youthful_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:06 np0005589310 systemd[1]: libpod-conmon-b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999.scope: Deactivated successfully.
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:04:06 np0005589310 podman[92369]: 2026-01-20 19:04:06.515972379 +0000 UTC m=+0.049497690 container create 81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:04:06 np0005589310 systemd[1]: Started libpod-conmon-81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d.scope.
Jan 20 14:04:06 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:06 np0005589310 podman[92369]: 2026-01-20 19:04:06.495318537 +0000 UTC m=+0.028843888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:06 np0005589310 podman[92369]: 2026-01-20 19:04:06.60544798 +0000 UTC m=+0.138973311 container init 81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 20 14:04:06 np0005589310 podman[92369]: 2026-01-20 19:04:06.611536136 +0000 UTC m=+0.145061447 container start 81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:06 np0005589310 podman[92369]: 2026-01-20 19:04:06.615990732 +0000 UTC m=+0.149516063 container attach 81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_curran, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:06 np0005589310 hopeful_curran[92385]: 167 167
Jan 20 14:04:06 np0005589310 systemd[1]: libpod-81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d.scope: Deactivated successfully.
Jan 20 14:04:06 np0005589310 podman[92369]: 2026-01-20 19:04:06.61967381 +0000 UTC m=+0.153199121 container died 81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:06 np0005589310 systemd[1]: var-lib-containers-storage-overlay-5b99b9a6c8f115c6cd1482734075d3778795515029ed16e52dd97eac82f93084-merged.mount: Deactivated successfully.
Jan 20 14:04:06 np0005589310 podman[92369]: 2026-01-20 19:04:06.66461223 +0000 UTC m=+0.198137581 container remove 81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 14:04:06 np0005589310 systemd[1]: libpod-conmon-81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d.scope: Deactivated successfully.
Jan 20 14:04:06 np0005589310 podman[92460]: 2026-01-20 19:04:06.83337555 +0000 UTC m=+0.042753439 container create 4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 20 14:04:06 np0005589310 systemd[1]: Started libpod-conmon-4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce.scope.
Jan 20 14:04:06 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66a067890950ec90c6f740d16d45456efd54591e1fbc956e87d614358232c59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66a067890950ec90c6f740d16d45456efd54591e1fbc956e87d614358232c59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66a067890950ec90c6f740d16d45456efd54591e1fbc956e87d614358232c59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66a067890950ec90c6f740d16d45456efd54591e1fbc956e87d614358232c59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66a067890950ec90c6f740d16d45456efd54591e1fbc956e87d614358232c59/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:06 np0005589310 podman[92460]: 2026-01-20 19:04:06.812500472 +0000 UTC m=+0.021878391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:06 np0005589310 podman[92460]: 2026-01-20 19:04:06.910307082 +0000 UTC m=+0.119684971 container init 4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:06 np0005589310 podman[92460]: 2026-01-20 19:04:06.916982481 +0000 UTC m=+0.126360370 container start 4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldberg, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:06 np0005589310 podman[92460]: 2026-01-20 19:04:06.920642879 +0000 UTC m=+0.130020788 container attach 4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: Saving service rgw.rgw spec with placement compute-0
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:06 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:04:06 np0005589310 python3[92497]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 14:04:07 np0005589310 python3[92577]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935846.6988952-36644-100073787657742/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:07 np0005589310 xenodochial_goldberg[92500]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:04:07 np0005589310 xenodochial_goldberg[92500]: --> All data devices are unavailable
Jan 20 14:04:07 np0005589310 systemd[1]: libpod-4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce.scope: Deactivated successfully.
Jan 20 14:04:07 np0005589310 podman[92460]: 2026-01-20 19:04:07.436398714 +0000 UTC m=+0.645776633 container died 4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:07 np0005589310 systemd[1]: var-lib-containers-storage-overlay-a66a067890950ec90c6f740d16d45456efd54591e1fbc956e87d614358232c59-merged.mount: Deactivated successfully.
Jan 20 14:04:07 np0005589310 podman[92460]: 2026-01-20 19:04:07.484435068 +0000 UTC m=+0.693812947 container remove 4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:07 np0005589310 systemd[1]: libpod-conmon-4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce.scope: Deactivated successfully.
Jan 20 14:04:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:07 np0005589310 python3[92704]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:07 np0005589310 podman[92705]: 2026-01-20 19:04:07.89759438 +0000 UTC m=+0.048517186 container create 667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c (image=quay.io/ceph/ceph:v20, name=hardcore_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:07 np0005589310 systemd[1]: Started libpod-conmon-667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c.scope.
Jan 20 14:04:07 np0005589310 podman[92705]: 2026-01-20 19:04:07.875237467 +0000 UTC m=+0.026160303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:07 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:07 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c2f478feb18d070006d399d29c36c467e20c134dca4901a1e9fc3464950a9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:07 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c2f478feb18d070006d399d29c36c467e20c134dca4901a1e9fc3464950a9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:07 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c2f478feb18d070006d399d29c36c467e20c134dca4901a1e9fc3464950a9e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:07 np0005589310 podman[92732]: 2026-01-20 19:04:07.986863776 +0000 UTC m=+0.046077908 container create 8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:08 np0005589310 podman[92705]: 2026-01-20 19:04:08.000697756 +0000 UTC m=+0.151620562 container init 667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c (image=quay.io/ceph/ceph:v20, name=hardcore_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:08 np0005589310 podman[92705]: 2026-01-20 19:04:08.016339949 +0000 UTC m=+0.167262755 container start 667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c (image=quay.io/ceph/ceph:v20, name=hardcore_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:08 np0005589310 podman[92705]: 2026-01-20 19:04:08.01975052 +0000 UTC m=+0.170673416 container attach 667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c (image=quay.io/ceph/ceph:v20, name=hardcore_vaughan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 14:04:08 np0005589310 systemd[1]: Started libpod-conmon-8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb.scope.
Jan 20 14:04:08 np0005589310 podman[92732]: 2026-01-20 19:04:07.966922632 +0000 UTC m=+0.026136804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:08 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:08 np0005589310 podman[92732]: 2026-01-20 19:04:08.085000994 +0000 UTC m=+0.144215166 container init 8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 20 14:04:08 np0005589310 podman[92732]: 2026-01-20 19:04:08.095383772 +0000 UTC m=+0.154597924 container start 8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:04:08 np0005589310 flamboyant_wiles[92753]: 167 167
Jan 20 14:04:08 np0005589310 podman[92732]: 2026-01-20 19:04:08.099331875 +0000 UTC m=+0.158546007 container attach 8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:04:08 np0005589310 systemd[1]: libpod-8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb.scope: Deactivated successfully.
Jan 20 14:04:08 np0005589310 podman[92732]: 2026-01-20 19:04:08.101305743 +0000 UTC m=+0.160519915 container died 8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:08 np0005589310 systemd[1]: var-lib-containers-storage-overlay-f1f8883fc97305fe402c747789f2194c5951f0ae8f59c6dc688a13bbd5bcb8ce-merged.mount: Deactivated successfully.
Jan 20 14:04:08 np0005589310 podman[92732]: 2026-01-20 19:04:08.154744666 +0000 UTC m=+0.213958828 container remove 8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 14:04:08 np0005589310 systemd[1]: libpod-conmon-8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb.scope: Deactivated successfully.
Jan 20 14:04:08 np0005589310 podman[92795]: 2026-01-20 19:04:08.347814525 +0000 UTC m=+0.051000827 container create 3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lewin, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:04:08 np0005589310 systemd[1]: Started libpod-conmon-3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34.scope.
Jan 20 14:04:08 np0005589310 podman[92795]: 2026-01-20 19:04:08.327941271 +0000 UTC m=+0.031127593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:08 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:08 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c2f9d1b17b0f8475f6445b63c9ef65b606f19286a1d6b733d035262ec369a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:08 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c2f9d1b17b0f8475f6445b63c9ef65b606f19286a1d6b733d035262ec369a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:08 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c2f9d1b17b0f8475f6445b63c9ef65b606f19286a1d6b733d035262ec369a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:08 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c2f9d1b17b0f8475f6445b63c9ef65b606f19286a1d6b733d035262ec369a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:08 np0005589310 podman[92795]: 2026-01-20 19:04:08.480922555 +0000 UTC m=+0.184108857 container init 3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lewin, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 14:04:08 np0005589310 podman[92795]: 2026-01-20 19:04:08.490585006 +0000 UTC m=+0.193771308 container start 3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:04:08 np0005589310 podman[92795]: 2026-01-20 19:04:08.494573261 +0000 UTC m=+0.197759573 container attach 3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lewin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:08 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:04:08 np0005589310 ceph-mgr[75417]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 20 14:04:08 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0[75116]: 2026-01-20T19:04:08.498+0000 7f03ac18a640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).mds e2 new map
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).mds e2 print_map
                                              e2
                                              btime 2026-01-20T19:04:08:498809+0000
                                              enable_multiple, ever_enabled_multiple: 1,1
                                              default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                              legacy client fscid: 1
                                              
                                              Filesystem 'cephfs' (1)
                                              fs_name	cephfs
                                              epoch	2
                                              flags	12 joinable allow_snaps allow_multimds_snaps
                                              created	2026-01-20T19:04:08.498557+0000
                                              modified	2026-01-20T19:04:08.498557+0000
                                              tableserver	0
                                              root	0
                                              session_timeout	60
                                              session_autoclose	300
                                              max_file_size	1099511627776
                                              max_xattr_size	65536
                                              required_client_features	{}
                                              last_failure	0
                                              last_failure_osd_epoch	0
                                              compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                              max_mds	1
                                              in	
                                              up	{}
                                              failed	
                                              damaged	
                                              stopped	
                                              data_pools	[7]
                                              metadata_pool	6
                                              inline_data	disabled
                                              balancer	
                                              bal_rank_mask	-1
                                              standby_count_wanted	0
                                              qdb_cluster	leader: 0 members: 
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 20 14:04:08 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 20 14:04:08 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:08 np0005589310 ceph-mgr[75417]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 20 14:04:08 np0005589310 systemd[1]: libpod-667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c.scope: Deactivated successfully.
Jan 20 14:04:08 np0005589310 podman[92705]: 2026-01-20 19:04:08.541457417 +0000 UTC m=+0.692380223 container died 667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c (image=quay.io/ceph/ceph:v20, name=hardcore_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:08 np0005589310 systemd[1]: var-lib-containers-storage-overlay-e1c2f478feb18d070006d399d29c36c467e20c134dca4901a1e9fc3464950a9e-merged.mount: Deactivated successfully.
Jan 20 14:04:08 np0005589310 podman[92705]: 2026-01-20 19:04:08.588541609 +0000 UTC m=+0.739464425 container remove 667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c (image=quay.io/ceph/ceph:v20, name=hardcore_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:04:08 np0005589310 systemd[1]: libpod-conmon-667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c.scope: Deactivated successfully.
Jan 20 14:04:08 np0005589310 kind_lewin[92811]: {
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:    "0": [
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:        {
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "devices": [
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "/dev/loop3"
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            ],
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "lv_name": "ceph_lv0",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "lv_size": "21470642176",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "name": "ceph_lv0",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "tags": {
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.cluster_name": "ceph",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.crush_device_class": "",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.encrypted": "0",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.objectstore": "bluestore",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.osd_id": "0",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.type": "block",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.vdo": "0",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.with_tpm": "0"
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            },
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "type": "block",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "vg_name": "ceph_vg0"
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:        }
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:    ],
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:    "1": [
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:        {
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "devices": [
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "/dev/loop4"
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            ],
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "lv_name": "ceph_lv1",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "lv_size": "21470642176",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "name": "ceph_lv1",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "tags": {
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.cluster_name": "ceph",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.crush_device_class": "",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.encrypted": "0",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.objectstore": "bluestore",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.osd_id": "1",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.type": "block",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.vdo": "0",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.with_tpm": "0"
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            },
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "type": "block",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "vg_name": "ceph_vg1"
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:        }
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:    ],
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:    "2": [
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:        {
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "devices": [
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "/dev/loop5"
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            ],
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "lv_name": "ceph_lv2",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "lv_size": "21470642176",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "name": "ceph_lv2",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "tags": {
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.cluster_name": "ceph",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.crush_device_class": "",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.encrypted": "0",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.objectstore": "bluestore",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.osd_id": "2",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.type": "block",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.vdo": "0",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:                "ceph.with_tpm": "0"
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            },
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "type": "block",
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:            "vg_name": "ceph_vg2"
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:        }
Jan 20 14:04:08 np0005589310 kind_lewin[92811]:    ]
Jan 20 14:04:08 np0005589310 kind_lewin[92811]: }
Jan 20 14:04:08 np0005589310 systemd[1]: libpod-3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34.scope: Deactivated successfully.
Jan 20 14:04:08 np0005589310 podman[92795]: 2026-01-20 19:04:08.881384774 +0000 UTC m=+0.584571076 container died 3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:08 np0005589310 systemd[1]: var-lib-containers-storage-overlay-59c2f9d1b17b0f8475f6445b63c9ef65b606f19286a1d6b733d035262ec369a7-merged.mount: Deactivated successfully.
Jan 20 14:04:08 np0005589310 podman[92795]: 2026-01-20 19:04:08.945776328 +0000 UTC m=+0.648962630 container remove 3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lewin, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: Saving service mds.cephfs spec with placement compute-0
Jan 20 14:04:08 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:08 np0005589310 systemd[1]: libpod-conmon-3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34.scope: Deactivated successfully.
Jan 20 14:04:08 np0005589310 python3[92858]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:09 np0005589310 podman[92872]: 2026-01-20 19:04:09.053606777 +0000 UTC m=+0.056330673 container create d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1 (image=quay.io/ceph/ceph:v20, name=crazy_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 14:04:09 np0005589310 systemd[1]: Started libpod-conmon-d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1.scope.
Jan 20 14:04:09 np0005589310 podman[92872]: 2026-01-20 19:04:09.029923943 +0000 UTC m=+0.032647839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:09 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fb700e23931605e36064bd3c59b9ecd8da5796fbdb4b1737d6a41de542042f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fb700e23931605e36064bd3c59b9ecd8da5796fbdb4b1737d6a41de542042f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fb700e23931605e36064bd3c59b9ecd8da5796fbdb4b1737d6a41de542042f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:09 np0005589310 podman[92872]: 2026-01-20 19:04:09.168815811 +0000 UTC m=+0.171539717 container init d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1 (image=quay.io/ceph/ceph:v20, name=crazy_stonebraker, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:09 np0005589310 podman[92872]: 2026-01-20 19:04:09.175709675 +0000 UTC m=+0.178433561 container start d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1 (image=quay.io/ceph/ceph:v20, name=crazy_stonebraker, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:04:09 np0005589310 podman[92872]: 2026-01-20 19:04:09.179375173 +0000 UTC m=+0.182099059 container attach d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1 (image=quay.io/ceph/ceph:v20, name=crazy_stonebraker, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:09 np0005589310 podman[92969]: 2026-01-20 19:04:09.514942896 +0000 UTC m=+0.056012516 container create a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:09 np0005589310 systemd[1]: Started libpod-conmon-a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177.scope.
Jan 20 14:04:09 np0005589310 podman[92969]: 2026-01-20 19:04:09.488384813 +0000 UTC m=+0.029454443 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:09 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:09 np0005589310 podman[92969]: 2026-01-20 19:04:09.624636619 +0000 UTC m=+0.165706219 container init a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamport, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 14:04:09 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14240 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:04:09 np0005589310 ceph-mgr[75417]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 20 14:04:09 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 20 14:04:09 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 20 14:04:09 np0005589310 podman[92969]: 2026-01-20 19:04:09.632307572 +0000 UTC m=+0.173377182 container start a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:09 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:09 np0005589310 podman[92969]: 2026-01-20 19:04:09.638994411 +0000 UTC m=+0.180064011 container attach a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 20 14:04:09 np0005589310 pedantic_lamport[92985]: 167 167
Jan 20 14:04:09 np0005589310 crazy_stonebraker[92910]: Scheduled mds.cephfs update...
Jan 20 14:04:09 np0005589310 systemd[1]: libpod-a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177.scope: Deactivated successfully.
Jan 20 14:04:09 np0005589310 podman[92969]: 2026-01-20 19:04:09.641565062 +0000 UTC m=+0.182634742 container died a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamport, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:09 np0005589310 systemd[1]: libpod-d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1.scope: Deactivated successfully.
Jan 20 14:04:09 np0005589310 podman[92872]: 2026-01-20 19:04:09.670094652 +0000 UTC m=+0.672818538 container died d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1 (image=quay.io/ceph/ceph:v20, name=crazy_stonebraker, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 20 14:04:09 np0005589310 systemd[1]: var-lib-containers-storage-overlay-e0eae0b264fdcc54bf8c8388619e14aa7beaf55195d423ebc54c728ad15fd682-merged.mount: Deactivated successfully.
Jan 20 14:04:09 np0005589310 podman[92969]: 2026-01-20 19:04:09.72878275 +0000 UTC m=+0.269852330 container remove a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:09 np0005589310 systemd[1]: libpod-conmon-a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177.scope: Deactivated successfully.
Jan 20 14:04:09 np0005589310 systemd[1]: var-lib-containers-storage-overlay-b0fb700e23931605e36064bd3c59b9ecd8da5796fbdb4b1737d6a41de542042f-merged.mount: Deactivated successfully.
Jan 20 14:04:09 np0005589310 podman[92872]: 2026-01-20 19:04:09.775139164 +0000 UTC m=+0.777863050 container remove d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1 (image=quay.io/ceph/ceph:v20, name=crazy_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:09 np0005589310 systemd[1]: libpod-conmon-d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1.scope: Deactivated successfully.
Jan 20 14:04:09 np0005589310 podman[93023]: 2026-01-20 19:04:09.916731807 +0000 UTC m=+0.045921725 container create 3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:09 np0005589310 systemd[1]: Started libpod-conmon-3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100.scope.
Jan 20 14:04:09 np0005589310 podman[93023]: 2026-01-20 19:04:09.895278876 +0000 UTC m=+0.024468824 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:09 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:10 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5ce1a5e7bac3679f2b2b7772473ec0dce3ecf95d2da3c3c1da18f16129519c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:10 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5ce1a5e7bac3679f2b2b7772473ec0dce3ecf95d2da3c3c1da18f16129519c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:10 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5ce1a5e7bac3679f2b2b7772473ec0dce3ecf95d2da3c3c1da18f16129519c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:10 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5ce1a5e7bac3679f2b2b7772473ec0dce3ecf95d2da3c3c1da18f16129519c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:10 np0005589310 podman[93023]: 2026-01-20 19:04:10.014115597 +0000 UTC m=+0.143305545 container init 3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dewdney, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 20 14:04:10 np0005589310 podman[93023]: 2026-01-20 19:04:10.026667816 +0000 UTC m=+0.155857724 container start 3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 20 14:04:10 np0005589310 podman[93023]: 2026-01-20 19:04:10.030483727 +0000 UTC m=+0.159673685 container attach 3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 20 14:04:10 np0005589310 ceph-mon[75120]: Saving service mds.cephfs spec with placement compute-0
Jan 20 14:04:10 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:10 np0005589310 lvm[93191]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:04:10 np0005589310 lvm[93191]: VG ceph_vg0 finished
Jan 20 14:04:10 np0005589310 lvm[93195]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:04:10 np0005589310 lvm[93195]: VG ceph_vg1 finished
Jan 20 14:04:10 np0005589310 lvm[93198]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:04:10 np0005589310 lvm[93198]: VG ceph_vg2 finished
Jan 20 14:04:10 np0005589310 zealous_dewdney[93039]: {}
Jan 20 14:04:10 np0005589310 python3[93196]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 14:04:11 np0005589310 systemd[1]: libpod-3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100.scope: Deactivated successfully.
Jan 20 14:04:11 np0005589310 systemd[1]: libpod-3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100.scope: Consumed 1.537s CPU time.
Jan 20 14:04:11 np0005589310 podman[93023]: 2026-01-20 19:04:11.011628348 +0000 UTC m=+1.140818296 container died 3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 20 14:04:11 np0005589310 systemd[1]: var-lib-containers-storage-overlay-bd5ce1a5e7bac3679f2b2b7772473ec0dce3ecf95d2da3c3c1da18f16129519c-merged.mount: Deactivated successfully.
Jan 20 14:04:11 np0005589310 podman[93023]: 2026-01-20 19:04:11.058568506 +0000 UTC m=+1.187758414 container remove 3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 20 14:04:11 np0005589310 systemd[1]: libpod-conmon-3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100.scope: Deactivated successfully.
Jan 20 14:04:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:04:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:04:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:11 np0005589310 ceph-mgr[75417]: [progress INFO root] update: starting ev 00eae6c6-6555-4af4-a1e9-816474e5931f (Updating rgw.rgw deployment (+1 -> 1))
Jan 20 14:04:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dbzrzk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 20 14:04:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dbzrzk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 20 14:04:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dbzrzk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 14:04:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 20 14:04:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:04:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:04:11 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.dbzrzk on compute-0
Jan 20 14:04:11 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.dbzrzk on compute-0
Jan 20 14:04:11 np0005589310 python3[93308]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935850.6741452-36696-191857578290303/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=82f4fc7876a2f5ec58c3b05a59c81182fa299df3 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:04:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:11 np0005589310 podman[93401]: 2026-01-20 19:04:11.731980207 +0000 UTC m=+0.048016394 container create cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 20 14:04:11 np0005589310 systemd[1]: Started libpod-conmon-cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea.scope.
Jan 20 14:04:11 np0005589310 podman[93401]: 2026-01-20 19:04:11.708941988 +0000 UTC m=+0.024978155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:11 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:11 np0005589310 podman[93401]: 2026-01-20 19:04:11.825771601 +0000 UTC m=+0.141807768 container init cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:04:11 np0005589310 podman[93401]: 2026-01-20 19:04:11.832829919 +0000 UTC m=+0.148866066 container start cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banzai, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Jan 20 14:04:11 np0005589310 podman[93401]: 2026-01-20 19:04:11.836197639 +0000 UTC m=+0.152233786 container attach cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banzai, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:04:11 np0005589310 epic_banzai[93442]: 167 167
Jan 20 14:04:11 np0005589310 systemd[1]: libpod-cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea.scope: Deactivated successfully.
Jan 20 14:04:11 np0005589310 podman[93401]: 2026-01-20 19:04:11.839060077 +0000 UTC m=+0.155096314 container died cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:11 np0005589310 systemd[1]: var-lib-containers-storage-overlay-2e7187c79086a57417d35fc0473df096bd16335a8263463e477d4f9160d438b7-merged.mount: Deactivated successfully.
Jan 20 14:04:11 np0005589310 podman[93401]: 2026-01-20 19:04:11.893563996 +0000 UTC m=+0.209600143 container remove cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banzai, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:04:11 np0005589310 systemd[1]: libpod-conmon-cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea.scope: Deactivated successfully.
Jan 20 14:04:11 np0005589310 python3[93439]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:11 np0005589310 systemd[1]: Reloading.
Jan 20 14:04:11 np0005589310 podman[93458]: 2026-01-20 19:04:11.981329907 +0000 UTC m=+0.050998247 container create 334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5 (image=quay.io/ceph/ceph:v20, name=nifty_mcclintock, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:04:12 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:04:12 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:04:12 np0005589310 podman[93458]: 2026-01-20 19:04:11.963298477 +0000 UTC m=+0.032966857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:12 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:12 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:12 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dbzrzk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 20 14:04:12 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dbzrzk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 14:04:12 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:12 np0005589310 ceph-mon[75120]: Deploying daemon rgw.rgw.compute-0.dbzrzk on compute-0
Jan 20 14:04:12 np0005589310 systemd[1]: Started libpod-conmon-334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5.scope.
Jan 20 14:04:12 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0940efd8d6a05d75cf2d85c3002c41b239dfda26b78d634a0353f60f1df0d31/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0940efd8d6a05d75cf2d85c3002c41b239dfda26b78d634a0353f60f1df0d31/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:12 np0005589310 podman[93458]: 2026-01-20 19:04:12.307788553 +0000 UTC m=+0.377456913 container init 334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5 (image=quay.io/ceph/ceph:v20, name=nifty_mcclintock, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:12 np0005589310 systemd[1]: Reloading.
Jan 20 14:04:12 np0005589310 podman[93458]: 2026-01-20 19:04:12.314741339 +0000 UTC m=+0.384409679 container start 334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5 (image=quay.io/ceph/ceph:v20, name=nifty_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:04:12 np0005589310 podman[93458]: 2026-01-20 19:04:12.317927294 +0000 UTC m=+0.387595634 container attach 334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5 (image=quay.io/ceph/ceph:v20, name=nifty_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 20 14:04:12 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:04:12 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:04:12 np0005589310 systemd[1]: Starting Ceph rgw.rgw.compute-0.dbzrzk for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 14:04:12 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 20 14:04:12 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/330594453' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 20 14:04:12 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/330594453' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 20 14:04:12 np0005589310 systemd[1]: libpod-334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5.scope: Deactivated successfully.
Jan 20 14:04:12 np0005589310 podman[93624]: 2026-01-20 19:04:12.912249232 +0000 UTC m=+0.054280734 container create f7b32e8a4eacf49b2988d80d641eb016f2c8c1cdd12ab725d9b088006388cef5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-rgw-rgw-compute-0-dbzrzk, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:12 np0005589310 podman[93635]: 2026-01-20 19:04:12.935632638 +0000 UTC m=+0.048152217 container died 334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5 (image=quay.io/ceph/ceph:v20, name=nifty_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:04:12 np0005589310 systemd[1]: var-lib-containers-storage-overlay-f0940efd8d6a05d75cf2d85c3002c41b239dfda26b78d634a0353f60f1df0d31-merged.mount: Deactivated successfully.
Jan 20 14:04:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fcc0afba7e1b181c020d867241bd3c1745c9e0cdf873a743c606c5da11eaf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fcc0afba7e1b181c020d867241bd3c1745c9e0cdf873a743c606c5da11eaf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fcc0afba7e1b181c020d867241bd3c1745c9e0cdf873a743c606c5da11eaf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fcc0afba7e1b181c020d867241bd3c1745c9e0cdf873a743c606c5da11eaf9/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.dbzrzk supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:12 np0005589310 podman[93624]: 2026-01-20 19:04:12.886192261 +0000 UTC m=+0.028223743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:12 np0005589310 podman[93635]: 2026-01-20 19:04:12.99195477 +0000 UTC m=+0.104474319 container remove 334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5 (image=quay.io/ceph/ceph:v20, name=nifty_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:12 np0005589310 systemd[1]: libpod-conmon-334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5.scope: Deactivated successfully.
Jan 20 14:04:13 np0005589310 podman[93624]: 2026-01-20 19:04:13.00371398 +0000 UTC m=+0.145745502 container init f7b32e8a4eacf49b2988d80d641eb016f2c8c1cdd12ab725d9b088006388cef5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-rgw-rgw-compute-0-dbzrzk, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:13 np0005589310 podman[93624]: 2026-01-20 19:04:13.016217258 +0000 UTC m=+0.158248730 container start f7b32e8a4eacf49b2988d80d641eb016f2c8c1cdd12ab725d9b088006388cef5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-rgw-rgw-compute-0-dbzrzk, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:13 np0005589310 bash[93624]: f7b32e8a4eacf49b2988d80d641eb016f2c8c1cdd12ab725d9b088006388cef5
Jan 20 14:04:13 np0005589310 systemd[1]: Started Ceph rgw.rgw.compute-0.dbzrzk for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 14:04:13 np0005589310 radosgw[93659]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 20 14:04:13 np0005589310 radosgw[93659]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Jan 20 14:04:13 np0005589310 radosgw[93659]: framework: beast
Jan 20 14:04:13 np0005589310 radosgw[93659]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 20 14:04:13 np0005589310 radosgw[93659]: init_numa not setting numa affinity
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:13 np0005589310 ceph-mgr[75417]: [progress INFO root] complete: finished ev 00eae6c6-6555-4af4-a1e9-816474e5931f (Updating rgw.rgw deployment (+1 -> 1))
Jan 20 14:04:13 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event 00eae6c6-6555-4af4-a1e9-816474e5931f (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Jan 20 14:04:13 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Jan 20 14:04:13 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/330594453' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/330594453' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:13 np0005589310 python3[93795]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:13 np0005589310 podman[93821]: 2026-01-20 19:04:13.787146892 +0000 UTC m=+0.045483225 container create 605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:04:13 np0005589310 systemd[1]: Started libpod-conmon-605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d.scope.
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 20 14:04:13 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 20 14:04:13 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2810622424' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 20 14:04:13 np0005589310 podman[93846]: 2026-01-20 19:04:13.858496892 +0000 UTC m=+0.073047772 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 14:04:13 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16fbe8cefb5d85cfdf3cc59a6c99e4fecf5b2698aff3a752bc4c9c956ebb59a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:13 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16fbe8cefb5d85cfdf3cc59a6c99e4fecf5b2698aff3a752bc4c9c956ebb59a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:13 np0005589310 podman[93821]: 2026-01-20 19:04:13.768550109 +0000 UTC m=+0.026886472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:13 np0005589310 podman[93821]: 2026-01-20 19:04:13.877963456 +0000 UTC m=+0.136299789 container init 605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:04:13 np0005589310 podman[93821]: 2026-01-20 19:04:13.88404577 +0000 UTC m=+0.142382103 container start 605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:13 np0005589310 podman[93821]: 2026-01-20 19:04:13.887612095 +0000 UTC m=+0.145948498 container attach 605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:04:13 np0005589310 podman[93846]: 2026-01-20 19:04:13.953034313 +0000 UTC m=+0.167585193 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 20 14:04:13 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 31 pg[8.0( empty local-lis/les=0/0 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [1] r=0 lpr=31 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: Saving service rgw.rgw spec with placement compute-0
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/2810622424' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1555469958' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 20 14:04:14 np0005589310 sweet_hypatia[93864]: 
Jan 20 14:04:14 np0005589310 sweet_hypatia[93864]: {"fsid":"90fff835-31df-513f-a409-b6642f04e6ac","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":127,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":3,"osd_up_since":1768935829,"num_in_osds":3,"osd_in_since":1768935800,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83841024,"bytes_avail":64328085504,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-01-20T19:04:08:498809+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-20T19:03:35.512911+0000","services":{}},"progress_events":{}}
Jan 20 14:04:14 np0005589310 systemd[1]: libpod-605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d.scope: Deactivated successfully.
Jan 20 14:04:14 np0005589310 podman[93821]: 2026-01-20 19:04:14.424506204 +0000 UTC m=+0.682842547 container died 605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:04:14 np0005589310 systemd[1]: var-lib-containers-storage-overlay-c16fbe8cefb5d85cfdf3cc59a6c99e4fecf5b2698aff3a752bc4c9c956ebb59a-merged.mount: Deactivated successfully.
Jan 20 14:04:14 np0005589310 podman[93821]: 2026-01-20 19:04:14.468921983 +0000 UTC m=+0.727258316 container remove 605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:14 np0005589310 ceph-mgr[75417]: [progress INFO root] Writing back 4 completed events
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:14 np0005589310 systemd[1]: libpod-conmon-605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d.scope: Deactivated successfully.
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:04:14 np0005589310 python3[94075]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:14 np0005589310 podman[94126]: 2026-01-20 19:04:14.825550957 +0000 UTC m=+0.043261242 container create c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674 (image=quay.io/ceph/ceph:v20, name=exciting_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2810622424' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 20 14:04:14 np0005589310 systemd[1]: Started libpod-conmon-c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674.scope.
Jan 20 14:04:14 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 20 14:04:14 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=31/32 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [1] r=0 lpr=31 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:14 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:14 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263fd83bea58b40a394ff632397138a1df6045fbda57ef2beda3e4a07a87fc11/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:14 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263fd83bea58b40a394ff632397138a1df6045fbda57ef2beda3e4a07a87fc11/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:14 np0005589310 podman[94126]: 2026-01-20 19:04:14.804800413 +0000 UTC m=+0.022510718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:14 np0005589310 podman[94126]: 2026-01-20 19:04:14.910581373 +0000 UTC m=+0.128291648 container init c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674 (image=quay.io/ceph/ceph:v20, name=exciting_ritchie, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:14 np0005589310 podman[94126]: 2026-01-20 19:04:14.926859261 +0000 UTC m=+0.144569536 container start c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674 (image=quay.io/ceph/ceph:v20, name=exciting_ritchie, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:14 np0005589310 podman[94126]: 2026-01-20 19:04:14.980672512 +0000 UTC m=+0.198382787 container attach c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674 (image=quay.io/ceph/ceph:v20, name=exciting_ritchie, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:04:15 np0005589310 podman[94719]: 2026-01-20 19:04:15.064785096 +0000 UTC m=+0.039143544 container create c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:04:15 np0005589310 systemd[1]: Started libpod-conmon-c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b.scope.
Jan 20 14:04:15 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:15 np0005589310 podman[94719]: 2026-01-20 19:04:15.138674506 +0000 UTC m=+0.113032994 container init c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:15 np0005589310 podman[94719]: 2026-01-20 19:04:15.048702913 +0000 UTC m=+0.023061381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:15 np0005589310 podman[94719]: 2026-01-20 19:04:15.147496646 +0000 UTC m=+0.121855114 container start c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:15 np0005589310 podman[94719]: 2026-01-20 19:04:15.151928391 +0000 UTC m=+0.126286949 container attach c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:15 np0005589310 xenodochial_euclid[94754]: 167 167
Jan 20 14:04:15 np0005589310 systemd[1]: libpod-c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b.scope: Deactivated successfully.
Jan 20 14:04:15 np0005589310 podman[94719]: 2026-01-20 19:04:15.154199486 +0000 UTC m=+0.128557944 container died c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:04:15 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:15 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:15 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:15 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:04:15 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:15 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:04:15 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/2810622424' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 20 14:04:15 np0005589310 systemd[1]: var-lib-containers-storage-overlay-8d9b72fbab088873c31668fb20ff70bd9f02e4c757de19c4bbc766de8f1277a6-merged.mount: Deactivated successfully.
Jan 20 14:04:15 np0005589310 podman[94719]: 2026-01-20 19:04:15.198745237 +0000 UTC m=+0.173103685 container remove c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:04:15 np0005589310 systemd[1]: libpod-conmon-c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b.scope: Deactivated successfully.
Jan 20 14:04:15 np0005589310 podman[94778]: 2026-01-20 19:04:15.348961235 +0000 UTC m=+0.046741004 container create dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gauss, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:04:15 np0005589310 systemd[1]: Started libpod-conmon-dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362.scope.
Jan 20 14:04:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 20 14:04:15 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006177493' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 20 14:04:15 np0005589310 exciting_ritchie[94142]: 
Jan 20 14:04:15 np0005589310 exciting_ritchie[94142]: {"epoch":1,"fsid":"90fff835-31df-513f-a409-b6642f04e6ac","modified":"2026-01-20T19:02:02.864397Z","created":"2026-01-20T19:02:02.864397Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 20 14:04:15 np0005589310 exciting_ritchie[94142]: dumped monmap epoch 1
Jan 20 14:04:15 np0005589310 podman[94778]: 2026-01-20 19:04:15.331078809 +0000 UTC m=+0.028858598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:15 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:15 np0005589310 systemd[1]: libpod-c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674.scope: Deactivated successfully.
Jan 20 14:04:15 np0005589310 podman[94126]: 2026-01-20 19:04:15.442986155 +0000 UTC m=+0.660696430 container died c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674 (image=quay.io/ceph/ceph:v20, name=exciting_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:15 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c04d26d2553404f3d26572c98759f16c85ec62e948f33d20aeff5e0309be23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:15 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c04d26d2553404f3d26572c98759f16c85ec62e948f33d20aeff5e0309be23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:15 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c04d26d2553404f3d26572c98759f16c85ec62e948f33d20aeff5e0309be23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:15 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c04d26d2553404f3d26572c98759f16c85ec62e948f33d20aeff5e0309be23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:15 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c04d26d2553404f3d26572c98759f16c85ec62e948f33d20aeff5e0309be23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:15 np0005589310 podman[94778]: 2026-01-20 19:04:15.458065954 +0000 UTC m=+0.155845743 container init dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:04:15 np0005589310 podman[94778]: 2026-01-20 19:04:15.472849257 +0000 UTC m=+0.170629026 container start dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:15 np0005589310 systemd[1]: var-lib-containers-storage-overlay-263fd83bea58b40a394ff632397138a1df6045fbda57ef2beda3e4a07a87fc11-merged.mount: Deactivated successfully.
Jan 20 14:04:15 np0005589310 podman[94778]: 2026-01-20 19:04:15.479841913 +0000 UTC m=+0.177621682 container attach dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gauss, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 20 14:04:15 np0005589310 podman[94126]: 2026-01-20 19:04:15.49483242 +0000 UTC m=+0.712542695 container remove c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674 (image=quay.io/ceph/ceph:v20, name=exciting_ritchie, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:04:15 np0005589310 systemd[1]: libpod-conmon-c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674.scope: Deactivated successfully.
Jan 20 14:04:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v75: 8 pgs: 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 20 14:04:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 20 14:04:15 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 20 14:04:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 20 14:04:15 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 20 14:04:15 np0005589310 relaxed_gauss[94795]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:04:15 np0005589310 relaxed_gauss[94795]: --> All data devices are unavailable
Jan 20 14:04:16 np0005589310 systemd[1]: libpod-dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362.scope: Deactivated successfully.
Jan 20 14:04:16 np0005589310 podman[94778]: 2026-01-20 19:04:16.014283643 +0000 UTC m=+0.712063412 container died dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:16 np0005589310 python3[94847]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:16 np0005589310 systemd[1]: var-lib-containers-storage-overlay-86c04d26d2553404f3d26572c98759f16c85ec62e948f33d20aeff5e0309be23-merged.mount: Deactivated successfully.
Jan 20 14:04:16 np0005589310 podman[94778]: 2026-01-20 19:04:16.090286274 +0000 UTC m=+0.788066043 container remove dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gauss, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:16 np0005589310 systemd[1]: libpod-conmon-dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362.scope: Deactivated successfully.
Jan 20 14:04:16 np0005589310 podman[94860]: 2026-01-20 19:04:16.123446764 +0000 UTC m=+0.067902808 container create d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff (image=quay.io/ceph/ceph:v20, name=youthful_fermat, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:16 np0005589310 systemd[1]: Started libpod-conmon-d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff.scope.
Jan 20 14:04:16 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:16 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd32466a20e556bb07eafbea159f3d77fb0a2decd8fe4da9edc9cb2061c65ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:16 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd32466a20e556bb07eafbea159f3d77fb0a2decd8fe4da9edc9cb2061c65ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:16 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 20 14:04:16 np0005589310 podman[94860]: 2026-01-20 19:04:16.104130563 +0000 UTC m=+0.048586627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:16 np0005589310 podman[94860]: 2026-01-20 19:04:16.198028631 +0000 UTC m=+0.142484695 container init d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff (image=quay.io/ceph/ceph:v20, name=youthful_fermat, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:16 np0005589310 podman[94860]: 2026-01-20 19:04:16.212978666 +0000 UTC m=+0.157434700 container start d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff (image=quay.io/ceph/ceph:v20, name=youthful_fermat, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:16 np0005589310 podman[94860]: 2026-01-20 19:04:16.216278326 +0000 UTC m=+0.160734370 container attach d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff (image=quay.io/ceph/ceph:v20, name=youthful_fermat, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:16 np0005589310 podman[94965]: 2026-01-20 19:04:16.561451347 +0000 UTC m=+0.051316373 container create c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:16 np0005589310 systemd[1]: Started libpod-conmon-c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf.scope.
Jan 20 14:04:16 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:16 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 33 pg[9.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:16 np0005589310 podman[94965]: 2026-01-20 19:04:16.62241119 +0000 UTC m=+0.112276236 container init c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sanderson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:16 np0005589310 podman[94965]: 2026-01-20 19:04:16.627642344 +0000 UTC m=+0.117507370 container start c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:16 np0005589310 podman[94965]: 2026-01-20 19:04:16.630281767 +0000 UTC m=+0.120146803 container attach c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 20 14:04:16 np0005589310 thirsty_sanderson[94981]: 167 167
Jan 20 14:04:16 np0005589310 systemd[1]: libpod-c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf.scope: Deactivated successfully.
Jan 20 14:04:16 np0005589310 podman[94965]: 2026-01-20 19:04:16.632460339 +0000 UTC m=+0.122325365 container died c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 14:04:16 np0005589310 podman[94965]: 2026-01-20 19:04:16.545566539 +0000 UTC m=+0.035431585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:16 np0005589310 systemd[1]: var-lib-containers-storage-overlay-559c91c3cf4130e0629ea69349f70f049ed3ed0b12ee9ee21b3ce41f4e54452a-merged.mount: Deactivated successfully.
Jan 20 14:04:16 np0005589310 podman[94965]: 2026-01-20 19:04:16.668789364 +0000 UTC m=+0.158654390 container remove c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sanderson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:04:16 np0005589310 systemd[1]: libpod-conmon-c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf.scope: Deactivated successfully.
Jan 20 14:04:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 20 14:04:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1968822085' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 20 14:04:16 np0005589310 youthful_fermat[94888]: [client.openstack]
Jan 20 14:04:16 np0005589310 youthful_fermat[94888]: 	key = AQD40G9pAAAAABAAnCl2JBwdjyAhlZdo4nlc0A==
Jan 20 14:04:16 np0005589310 youthful_fermat[94888]: 	caps mgr = "allow *"
Jan 20 14:04:16 np0005589310 youthful_fermat[94888]: 	caps mon = "profile rbd"
Jan 20 14:04:16 np0005589310 youthful_fermat[94888]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 20 14:04:16 np0005589310 systemd[1]: libpod-d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff.scope: Deactivated successfully.
Jan 20 14:04:16 np0005589310 podman[94860]: 2026-01-20 19:04:16.735845722 +0000 UTC m=+0.680301806 container died d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff (image=quay.io/ceph/ceph:v20, name=youthful_fermat, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:04:16 np0005589310 systemd[1]: var-lib-containers-storage-overlay-ddd32466a20e556bb07eafbea159f3d77fb0a2decd8fe4da9edc9cb2061c65ec-merged.mount: Deactivated successfully.
Jan 20 14:04:16 np0005589310 podman[94860]: 2026-01-20 19:04:16.7848662 +0000 UTC m=+0.729322244 container remove d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff (image=quay.io/ceph/ceph:v20, name=youthful_fermat, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:16 np0005589310 systemd[1]: libpod-conmon-d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff.scope: Deactivated successfully.
Jan 20 14:04:16 np0005589310 podman[95019]: 2026-01-20 19:04:16.835432384 +0000 UTC m=+0.045141806 container create 7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 14:04:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 20 14:04:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 20 14:04:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 20 14:04:16 np0005589310 systemd[1]: Started libpod-conmon-7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e.scope.
Jan 20 14:04:16 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 20 14:04:16 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:16 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:16 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe080296db9f9d55a15b7d5dcbd3a556b1d1910bf88d76295f583830253e97a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:16 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe080296db9f9d55a15b7d5dcbd3a556b1d1910bf88d76295f583830253e97a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:16 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe080296db9f9d55a15b7d5dcbd3a556b1d1910bf88d76295f583830253e97a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:16 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe080296db9f9d55a15b7d5dcbd3a556b1d1910bf88d76295f583830253e97a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:16 np0005589310 podman[95019]: 2026-01-20 19:04:16.816194175 +0000 UTC m=+0.025903617 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:16 np0005589310 podman[95019]: 2026-01-20 19:04:16.912113991 +0000 UTC m=+0.121823433 container init 7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 20 14:04:16 np0005589310 podman[95019]: 2026-01-20 19:04:16.919465936 +0000 UTC m=+0.129175358 container start 7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 14:04:16 np0005589310 podman[95019]: 2026-01-20 19:04:16.922730133 +0000 UTC m=+0.132439555 container attach 7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 14:04:17 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/1968822085' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 20 14:04:17 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]: {
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:    "0": [
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:        {
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "devices": [
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "/dev/loop3"
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            ],
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "lv_name": "ceph_lv0",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "lv_size": "21470642176",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "name": "ceph_lv0",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "tags": {
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.cluster_name": "ceph",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.crush_device_class": "",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.encrypted": "0",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.objectstore": "bluestore",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.osd_id": "0",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.type": "block",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.vdo": "0",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.with_tpm": "0"
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            },
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "type": "block",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "vg_name": "ceph_vg0"
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:        }
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:    ],
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:    "1": [
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:        {
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "devices": [
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "/dev/loop4"
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            ],
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "lv_name": "ceph_lv1",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "lv_size": "21470642176",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "name": "ceph_lv1",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "tags": {
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.cluster_name": "ceph",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.crush_device_class": "",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.encrypted": "0",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.objectstore": "bluestore",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.osd_id": "1",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.type": "block",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.vdo": "0",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.with_tpm": "0"
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            },
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "type": "block",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "vg_name": "ceph_vg1"
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:        }
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:    ],
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:    "2": [
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:        {
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "devices": [
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "/dev/loop5"
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            ],
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "lv_name": "ceph_lv2",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "lv_size": "21470642176",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "name": "ceph_lv2",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "tags": {
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.cluster_name": "ceph",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.crush_device_class": "",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.encrypted": "0",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.objectstore": "bluestore",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.osd_id": "2",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.type": "block",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.vdo": "0",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:                "ceph.with_tpm": "0"
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            },
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "type": "block",
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:            "vg_name": "ceph_vg2"
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:        }
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]:    ]
Jan 20 14:04:17 np0005589310 naughty_wiles[95035]: }
Jan 20 14:04:17 np0005589310 systemd[1]: libpod-7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e.scope: Deactivated successfully.
Jan 20 14:04:17 np0005589310 podman[95019]: 2026-01-20 19:04:17.298947065 +0000 UTC m=+0.508656487 container died 7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 14:04:17 np0005589310 systemd[1]: var-lib-containers-storage-overlay-4fe080296db9f9d55a15b7d5dcbd3a556b1d1910bf88d76295f583830253e97a-merged.mount: Deactivated successfully.
Jan 20 14:04:17 np0005589310 podman[95019]: 2026-01-20 19:04:17.354418776 +0000 UTC m=+0.564128198 container remove 7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:17 np0005589310 systemd[1]: libpod-conmon-7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e.scope: Deactivated successfully.
Jan 20 14:04:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v78: 9 pgs: 1 unknown, 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:17 np0005589310 podman[95146]: 2026-01-20 19:04:17.836049679 +0000 UTC m=+0.042712758 container create 97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 14:04:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 20 14:04:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 20 14:04:17 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 20 14:04:17 np0005589310 systemd[1]: Started libpod-conmon-97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def.scope.
Jan 20 14:04:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 20 14:04:17 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 20 14:04:17 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 35 pg[10.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:17 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:17 np0005589310 podman[95146]: 2026-01-20 19:04:17.818401519 +0000 UTC m=+0.025064618 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:17 np0005589310 podman[95146]: 2026-01-20 19:04:17.92299085 +0000 UTC m=+0.129653959 container init 97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 20 14:04:17 np0005589310 podman[95146]: 2026-01-20 19:04:17.929746671 +0000 UTC m=+0.136409750 container start 97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:17 np0005589310 podman[95146]: 2026-01-20 19:04:17.93307376 +0000 UTC m=+0.139736839 container attach 97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:17 np0005589310 bold_perlman[95191]: 167 167
Jan 20 14:04:17 np0005589310 systemd[1]: libpod-97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def.scope: Deactivated successfully.
Jan 20 14:04:17 np0005589310 podman[95146]: 2026-01-20 19:04:17.936663325 +0000 UTC m=+0.143326424 container died 97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 14:04:17 np0005589310 systemd[1]: var-lib-containers-storage-overlay-e083229f607171989f7913189abe35db9bdc8e2bc12f8e9f4b32ee5fc5be7393-merged.mount: Deactivated successfully.
Jan 20 14:04:17 np0005589310 podman[95146]: 2026-01-20 19:04:17.975138292 +0000 UTC m=+0.181801371 container remove 97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 20 14:04:17 np0005589310 systemd[1]: libpod-conmon-97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def.scope: Deactivated successfully.
Jan 20 14:04:18 np0005589310 podman[95287]: 2026-01-20 19:04:18.173249621 +0000 UTC m=+0.070742826 container create 21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_sinoussi, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:04:18 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 20 14:04:18 np0005589310 systemd[1]: Started libpod-conmon-21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093.scope.
Jan 20 14:04:18 np0005589310 podman[95287]: 2026-01-20 19:04:18.139959919 +0000 UTC m=+0.037453194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:18 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:18 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bee4aade414cba5034b7bae17b67e8833ccfccdd5802d20bb817f2694156f1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:18 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bee4aade414cba5034b7bae17b67e8833ccfccdd5802d20bb817f2694156f1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:18 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bee4aade414cba5034b7bae17b67e8833ccfccdd5802d20bb817f2694156f1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:18 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bee4aade414cba5034b7bae17b67e8833ccfccdd5802d20bb817f2694156f1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:18 np0005589310 ansible-async_wrapper.py[95323]: Invoked with j778018551134 30 /home/zuul/.ansible/tmp/ansible-tmp-1768935857.7789674-36768-47467053585400/AnsiballZ_command.py _
Jan 20 14:04:18 np0005589310 podman[95287]: 2026-01-20 19:04:18.273199482 +0000 UTC m=+0.170692687 container init 21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_sinoussi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:18 np0005589310 ansible-async_wrapper.py[95333]: Starting module and watcher
Jan 20 14:04:18 np0005589310 ansible-async_wrapper.py[95333]: Start watching 95334 (30)
Jan 20 14:04:18 np0005589310 ansible-async_wrapper.py[95334]: Start module (95334)
Jan 20 14:04:18 np0005589310 podman[95287]: 2026-01-20 19:04:18.281315265 +0000 UTC m=+0.178808450 container start 21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 20 14:04:18 np0005589310 ansible-async_wrapper.py[95323]: Return async_wrapper task started.
Jan 20 14:04:18 np0005589310 podman[95287]: 2026-01-20 19:04:18.28948653 +0000 UTC m=+0.186979745 container attach 21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_sinoussi, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:04:18 np0005589310 python3[95336]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:18 np0005589310 podman[95337]: 2026-01-20 19:04:18.524475098 +0000 UTC m=+0.064888187 container create 80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64 (image=quay.io/ceph/ceph:v20, name=lucid_colden, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 14:04:18 np0005589310 systemd[1]: Started libpod-conmon-80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64.scope.
Jan 20 14:04:18 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:18 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7833becc9b7e21b0c63ed8d61771434093034015654c31b10c8db0a56443adad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:18 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7833becc9b7e21b0c63ed8d61771434093034015654c31b10c8db0a56443adad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:18 np0005589310 podman[95337]: 2026-01-20 19:04:18.593775109 +0000 UTC m=+0.134188228 container init 80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64 (image=quay.io/ceph/ceph:v20, name=lucid_colden, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:18 np0005589310 podman[95337]: 2026-01-20 19:04:18.503266712 +0000 UTC m=+0.043679841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:18 np0005589310 podman[95337]: 2026-01-20 19:04:18.599180587 +0000 UTC m=+0.139593676 container start 80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64 (image=quay.io/ceph/ceph:v20, name=lucid_colden, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:04:18 np0005589310 podman[95337]: 2026-01-20 19:04:18.603329175 +0000 UTC m=+0.143742284 container attach 80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64 (image=quay.io/ceph/ceph:v20, name=lucid_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 20 14:04:18 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 14:04:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 20 14:04:18 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 20 14:04:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:19 np0005589310 lvm[95451]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:04:19 np0005589310 lvm[95451]: VG ceph_vg1 finished
Jan 20 14:04:19 np0005589310 lvm[95448]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:04:19 np0005589310 lvm[95448]: VG ceph_vg0 finished
Jan 20 14:04:19 np0005589310 lvm[95453]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:04:19 np0005589310 lvm[95453]: VG ceph_vg2 finished
Jan 20 14:04:19 np0005589310 lvm[95454]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:04:19 np0005589310 lvm[95454]: VG ceph_vg1 finished
Jan 20 14:04:19 np0005589310 lvm[95455]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:04:19 np0005589310 lvm[95455]: VG ceph_vg0 finished
Jan 20 14:04:19 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 14:04:19 np0005589310 lvm[95456]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:04:19 np0005589310 lvm[95456]: VG ceph_vg1 finished
Jan 20 14:04:19 np0005589310 lucid_colden[95362]: 
Jan 20 14:04:19 np0005589310 lucid_colden[95362]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 20 14:04:19 np0005589310 systemd[1]: libpod-80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64.scope: Deactivated successfully.
Jan 20 14:04:19 np0005589310 conmon[95362]: conmon 80d324d76f8d69626357 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64.scope/container/memory.events
Jan 20 14:04:19 np0005589310 podman[95337]: 2026-01-20 19:04:19.146548325 +0000 UTC m=+0.686961434 container died 80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64 (image=quay.io/ceph/ceph:v20, name=lucid_colden, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 20 14:04:19 np0005589310 lucid_sinoussi[95327]: {}
Jan 20 14:04:19 np0005589310 systemd[1]: var-lib-containers-storage-overlay-7833becc9b7e21b0c63ed8d61771434093034015654c31b10c8db0a56443adad-merged.mount: Deactivated successfully.
Jan 20 14:04:19 np0005589310 podman[95337]: 2026-01-20 19:04:19.203974923 +0000 UTC m=+0.744388012 container remove 80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64 (image=quay.io/ceph/ceph:v20, name=lucid_colden, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 14:04:19 np0005589310 systemd[1]: libpod-conmon-80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64.scope: Deactivated successfully.
Jan 20 14:04:19 np0005589310 systemd[1]: libpod-21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093.scope: Deactivated successfully.
Jan 20 14:04:19 np0005589310 systemd[1]: libpod-21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093.scope: Consumed 1.445s CPU time.
Jan 20 14:04:19 np0005589310 podman[95287]: 2026-01-20 19:04:19.214207258 +0000 UTC m=+1.111700443 container died 21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_sinoussi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:19 np0005589310 ansible-async_wrapper.py[95334]: Module complete (95334)
Jan 20 14:04:19 np0005589310 systemd[1]: var-lib-containers-storage-overlay-5bee4aade414cba5034b7bae17b67e8833ccfccdd5802d20bb817f2694156f1b-merged.mount: Deactivated successfully.
Jan 20 14:04:19 np0005589310 podman[95287]: 2026-01-20 19:04:19.250889821 +0000 UTC m=+1.148383006 container remove 21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_sinoussi, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:19 np0005589310 systemd[1]: libpod-conmon-21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093.scope: Deactivated successfully.
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:19 np0005589310 ceph-mgr[75417]: [progress INFO root] update: starting ev 3c87c65b-3318-4cc1-94df-8e8d07df483e (Updating mds.cephfs deployment (+1 -> 1))
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.djcctc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.djcctc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.djcctc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:04:19 np0005589310 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.djcctc on compute-0
Jan 20 14:04:19 np0005589310 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.djcctc on compute-0
Jan 20 14:04:19 np0005589310 ceph-mgr[75417]: [progress WARNING root] Starting Global Recovery Event,3 pgs not in active + clean state
Jan 20 14:04:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v81: 10 pgs: 1 unknown, 9 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 2.2 KiB/s wr, 4 op/s
Jan 20 14:04:19 np0005589310 python3[95599]: ansible-ansible.legacy.async_status Invoked with jid=j778018551134.95323 mode=status _async_dir=/root/.ansible_async
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 20 14:04:19 np0005589310 podman[95623]: 2026-01-20 19:04:19.925426729 +0000 UTC m=+0.091765647 container create d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_pike, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.djcctc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.djcctc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 14:04:19 np0005589310 ceph-mon[75120]: Deploying daemon mds.cephfs.compute-0.djcctc on compute-0
Jan 20 14:04:19 np0005589310 podman[95623]: 2026-01-20 19:04:19.86498362 +0000 UTC m=+0.031322588 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:19 np0005589310 systemd[1]: Started libpod-conmon-d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9.scope.
Jan 20 14:04:20 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:20 np0005589310 podman[95623]: 2026-01-20 19:04:20.055752924 +0000 UTC m=+0.222091892 container init d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_pike, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:20 np0005589310 podman[95623]: 2026-01-20 19:04:20.065743222 +0000 UTC m=+0.232082120 container start d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_pike, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:20 np0005589310 loving_pike[95664]: 167 167
Jan 20 14:04:20 np0005589310 systemd[1]: libpod-d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9.scope: Deactivated successfully.
Jan 20 14:04:20 np0005589310 podman[95623]: 2026-01-20 19:04:20.098882931 +0000 UTC m=+0.265221829 container attach d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 14:04:20 np0005589310 podman[95623]: 2026-01-20 19:04:20.099785762 +0000 UTC m=+0.266124640 container died d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_pike, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Jan 20 14:04:20 np0005589310 systemd[1]: var-lib-containers-storage-overlay-47a3e654e355d3ceb78f6bdc32eb8f86922d34d08236018346899dbffcf2ed48-merged.mount: Deactivated successfully.
Jan 20 14:04:20 np0005589310 podman[95623]: 2026-01-20 19:04:20.167911885 +0000 UTC m=+0.334250763 container remove d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:20 np0005589310 systemd[1]: libpod-conmon-d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9.scope: Deactivated successfully.
Jan 20 14:04:20 np0005589310 python3[95692]: ansible-ansible.legacy.async_status Invoked with jid=j778018551134.95323 mode=cleanup _async_dir=/root/.ansible_async
Jan 20 14:04:20 np0005589310 systemd[1]: Reloading.
Jan 20 14:04:20 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:04:20 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:04:20 np0005589310 systemd[1]: Reloading.
Jan 20 14:04:20 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:04:20 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:04:20 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 37 pg[11.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:20 np0005589310 systemd[1]: Starting Ceph mds.cephfs.compute-0.djcctc for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 14:04:20 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 20 14:04:20 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 14:04:20 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 20 14:04:20 np0005589310 python3[95811]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:20 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 20 14:04:20 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 20 14:04:20 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 20 14:04:20 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:20 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 20 14:04:20 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 14:04:20 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 20 14:04:21 np0005589310 podman[95849]: 2026-01-20 19:04:21.002103996 +0000 UTC m=+0.047349099 container create b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2 (image=quay.io/ceph/ceph:v20, name=infallible_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Jan 20 14:04:21 np0005589310 podman[95869]: 2026-01-20 19:04:21.032680405 +0000 UTC m=+0.053965017 container create 83d8b470dcb94ce86655b877af0d38a9040c6f7ce293453f291fdc0baa3bb4fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mds-cephfs-compute-0-djcctc, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 20 14:04:21 np0005589310 systemd[1]: Started libpod-conmon-b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2.scope.
Jan 20 14:04:21 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf28c6e0124c4886c4fbadcb9df0193d482b3f7182325662708d5bbf7a3416d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:21 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf28c6e0124c4886c4fbadcb9df0193d482b3f7182325662708d5bbf7a3416d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:21 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf28c6e0124c4886c4fbadcb9df0193d482b3f7182325662708d5bbf7a3416d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:21 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf28c6e0124c4886c4fbadcb9df0193d482b3f7182325662708d5bbf7a3416d1/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.djcctc supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:21 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:21 np0005589310 podman[95869]: 2026-01-20 19:04:21.075700919 +0000 UTC m=+0.096985561 container init 83d8b470dcb94ce86655b877af0d38a9040c6f7ce293453f291fdc0baa3bb4fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mds-cephfs-compute-0-djcctc, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 14:04:21 np0005589310 podman[95849]: 2026-01-20 19:04:20.982055268 +0000 UTC m=+0.027300381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:21 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557126ff124fb90265e6870582a66e43b82e92d20186925207f6e5e2139cd119/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:21 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557126ff124fb90265e6870582a66e43b82e92d20186925207f6e5e2139cd119/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:21 np0005589310 podman[95869]: 2026-01-20 19:04:21.084860058 +0000 UTC m=+0.106144680 container start 83d8b470dcb94ce86655b877af0d38a9040c6f7ce293453f291fdc0baa3bb4fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mds-cephfs-compute-0-djcctc, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:21 np0005589310 bash[95869]: 83d8b470dcb94ce86655b877af0d38a9040c6f7ce293453f291fdc0baa3bb4fc
Jan 20 14:04:21 np0005589310 podman[95869]: 2026-01-20 19:04:21.0140457 +0000 UTC m=+0.035330342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:21 np0005589310 podman[95849]: 2026-01-20 19:04:21.09460004 +0000 UTC m=+0.139845153 container init b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2 (image=quay.io/ceph/ceph:v20, name=infallible_cerf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:21 np0005589310 systemd[1]: Started Ceph mds.cephfs.compute-0.djcctc for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 14:04:21 np0005589310 podman[95849]: 2026-01-20 19:04:21.102556148 +0000 UTC m=+0.147801241 container start b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2 (image=quay.io/ceph/ceph:v20, name=infallible_cerf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 14:04:21 np0005589310 podman[95849]: 2026-01-20 19:04:21.105896749 +0000 UTC m=+0.151141842 container attach b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2 (image=quay.io/ceph/ceph:v20, name=infallible_cerf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 14:04:21 np0005589310 ceph-mds[95894]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 14:04:21 np0005589310 ceph-mds[95894]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 20 14:04:21 np0005589310 ceph-mds[95894]: main not setting numa affinity
Jan 20 14:04:21 np0005589310 ceph-mds[95894]: pidfile_write: ignore empty --pid-file
Jan 20 14:04:21 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mds-cephfs-compute-0-djcctc[95889]: starting mds.cephfs.compute-0.djcctc at 
Jan 20 14:04:21 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc Updating MDS map to version 2 from mon.0
Jan 20 14:04:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:04:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:04:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 20 14:04:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:21 np0005589310 ceph-mgr[75417]: [progress INFO root] complete: finished ev 3c87c65b-3318-4cc1-94df-8e8d07df483e (Updating mds.cephfs deployment (+1 -> 1))
Jan 20 14:04:21 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event 3c87c65b-3318-4cc1-94df-8e8d07df483e (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Jan 20 14:04:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 20 14:04:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 20 14:04:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v84: 11 pgs: 1 unknown, 10 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 2.2 KiB/s wr, 4 op/s
Jan 20 14:04:21 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 20 14:04:22 np0005589310 infallible_cerf[95887]: 
Jan 20 14:04:22 np0005589310 infallible_cerf[95887]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).mds e3 new map
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2026-01-20T19:04:22:675421+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-20T19:04:08.498557+0000#012modified#0112026-01-20T19:04:08.498557+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.djcctc{-1:14258} state up:standby seq 1 addr [v2:192.168.122.100:6814/78182875,v1:192.168.122.100:6815/78182875] compat {c=[1],r=[1],i=[1fff]}]
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc Updating MDS map to version 3 from mon.0
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc Monitors have assigned me to become a standby
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/78182875,v1:192.168.122.100:6815/78182875] up:boot
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/78182875,v1:192.168.122.100:6815/78182875] as mds.0
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.djcctc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.djcctc"} v 0)
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.djcctc"} : dispatch
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).mds e3 all = 0
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).mds e4 new map
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-01-20T19:04:22:696302+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-20T19:04:08.498557+0000#012modified#0112026-01-20T19:04:22.696296+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14258}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.djcctc{0:14258} state up:creating seq 1 addr [v2:192.168.122.100:6814/78182875,v1:192.168.122.100:6815/78182875] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 20 14:04:22 np0005589310 systemd[1]: libpod-b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2.scope: Deactivated successfully.
Jan 20 14:04:22 np0005589310 podman[95849]: 2026-01-20 19:04:22.702706945 +0000 UTC m=+1.747952038 container died b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2 (image=quay.io/ceph/ceph:v20, name=infallible_cerf, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc Updating MDS map to version 4 from mon.0
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.djcctc=up:creating}
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x1
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x100
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x600
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x601
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x602
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x603
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x604
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x605
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x606
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x607
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x608
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x609
Jan 20 14:04:22 np0005589310 ceph-mds[95894]: mds.0.4 creating_done
Jan 20 14:04:22 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.djcctc is now active in filesystem cephfs as rank 0
Jan 20 14:04:22 np0005589310 systemd[1]: var-lib-containers-storage-overlay-557126ff124fb90265e6870582a66e43b82e92d20186925207f6e5e2139cd119-merged.mount: Deactivated successfully.
Jan 20 14:04:22 np0005589310 podman[95849]: 2026-01-20 19:04:22.770885669 +0000 UTC m=+1.816130762 container remove b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2 (image=quay.io/ceph/ceph:v20, name=infallible_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 14:04:22 np0005589310 systemd[1]: libpod-conmon-b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2.scope: Deactivated successfully.
Jan 20 14:04:22 np0005589310 podman[96093]: 2026-01-20 19:04:22.886484143 +0000 UTC m=+0.057291705 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 20 14:04:22 np0005589310 radosgw[93659]: v1 topic migration: starting v1 topic migration..
Jan 20 14:04:22 np0005589310 radosgw[93659]: v1 topic migration: finished v1 topic migration
Jan 20 14:04:22 np0005589310 radosgw[93659]: framework: beast
Jan 20 14:04:22 np0005589310 radosgw[93659]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 20 14:04:22 np0005589310 radosgw[93659]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 20 14:04:22 np0005589310 podman[96093]: 2026-01-20 19:04:22.989728642 +0000 UTC m=+0.160536214 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 20 14:04:22 np0005589310 radosgw[93659]: starting handler: beast
Jan 20 14:04:23 np0005589310 radosgw[93659]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 14:04:23 np0005589310 radosgw[93659]: mgrc service_daemon_register rgw.14250 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.dbzrzk,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=6427199d-52d7-4810-99bf-ec966a7007f4,zone_name=default,zonegroup_id=7f3fa8c0-913b-4a23-89e0-2cf7070dd47e,zonegroup_name=default}
Jan 20 14:04:23 np0005589310 ansible-async_wrapper.py[95333]: Done in kid B.
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v86: 11 pgs: 1 unknown, 10 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.9 KiB/s wr, 4 op/s
Jan 20 14:04:23 np0005589310 python3[96265]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:23 np0005589310 podman[96292]: 2026-01-20 19:04:23.687879593 +0000 UTC m=+0.047700918 container create 3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018 (image=quay.io/ceph/ceph:v20, name=goofy_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: daemon mds.cephfs.compute-0.djcctc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: Cluster is now healthy
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: daemon mds.cephfs.compute-0.djcctc is now active in filesystem cephfs as rank 0
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).mds e5 new map
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2026-01-20T19:04:23:700833+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-20T19:04:08.498557+0000#012modified#0112026-01-20T19:04:23.700829+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14258}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 14258 members: 14258#012[mds.cephfs.compute-0.djcctc{0:14258} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/78182875,v1:192.168.122.100:6815/78182875] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 20 14:04:23 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc Updating MDS map to version 5 from mon.0
Jan 20 14:04:23 np0005589310 ceph-mds[95894]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 20 14:04:23 np0005589310 ceph-mds[95894]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 20 14:04:23 np0005589310 ceph-mds[95894]: mds.0.4 recovery_done -- successful recovery!
Jan 20 14:04:23 np0005589310 ceph-mds[95894]: mds.0.4 active_start
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/78182875,v1:192.168.122.100:6815/78182875] up:active
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.djcctc=up:active}
Jan 20 14:04:23 np0005589310 systemd[1]: Started libpod-conmon-3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018.scope.
Jan 20 14:04:23 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:23 np0005589310 podman[96292]: 2026-01-20 19:04:23.663503051 +0000 UTC m=+0.023324396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:23 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4ca42337ef0a7361282851363ba241c0bdb1d9655654f8e13784af33459412/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:23 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4ca42337ef0a7361282851363ba241c0bdb1d9655654f8e13784af33459412/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:23 np0005589310 podman[96292]: 2026-01-20 19:04:23.779924626 +0000 UTC m=+0.139745951 container init 3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018 (image=quay.io/ceph/ceph:v20, name=goofy_hypatia, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 14:04:23 np0005589310 podman[96292]: 2026-01-20 19:04:23.78600973 +0000 UTC m=+0.145831055 container start 3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018 (image=quay.io/ceph/ceph:v20, name=goofy_hypatia, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:23 np0005589310 podman[96292]: 2026-01-20 19:04:23.789303178 +0000 UTC m=+0.149124503 container attach 3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018 (image=quay.io/ceph/ceph:v20, name=goofy_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:04:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:04:24 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 14:04:24 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} v 0)
Jan 20 14:04:24 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} : dispatch
Jan 20 14:04:24 np0005589310 goofy_hypatia[96329]: 
Jan 20 14:04:24 np0005589310 goofy_hypatia[96329]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Jan 20 14:04:24 np0005589310 systemd[1]: libpod-3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018.scope: Deactivated successfully.
Jan 20 14:04:24 np0005589310 podman[96292]: 2026-01-20 19:04:24.232129357 +0000 UTC m=+0.591950672 container died 3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018 (image=quay.io/ceph/ceph:v20, name=goofy_hypatia, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:04:24 np0005589310 systemd[1]: var-lib-containers-storage-overlay-ad4ca42337ef0a7361282851363ba241c0bdb1d9655654f8e13784af33459412-merged.mount: Deactivated successfully.
Jan 20 14:04:24 np0005589310 podman[96292]: 2026-01-20 19:04:24.273771938 +0000 UTC m=+0.633593263 container remove 3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018 (image=quay.io/ceph/ceph:v20, name=goofy_hypatia, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:04:24 np0005589310 systemd[1]: libpod-conmon-3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018.scope: Deactivated successfully.
Jan 20 14:04:24 np0005589310 podman[96428]: 2026-01-20 19:04:24.292489365 +0000 UTC m=+0.050101415 container create cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:24 np0005589310 systemd[1]: Started libpod-conmon-cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb.scope.
Jan 20 14:04:24 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:24 np0005589310 podman[96428]: 2026-01-20 19:04:24.358771594 +0000 UTC m=+0.116383654 container init cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:24 np0005589310 podman[96428]: 2026-01-20 19:04:24.365486334 +0000 UTC m=+0.123098384 container start cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kapitsa, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:04:24 np0005589310 trusting_kapitsa[96456]: 167 167
Jan 20 14:04:24 np0005589310 podman[96428]: 2026-01-20 19:04:24.272597541 +0000 UTC m=+0.030209611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:24 np0005589310 systemd[1]: libpod-cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb.scope: Deactivated successfully.
Jan 20 14:04:24 np0005589310 podman[96428]: 2026-01-20 19:04:24.371137798 +0000 UTC m=+0.128749898 container attach cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 14:04:24 np0005589310 podman[96428]: 2026-01-20 19:04:24.372543592 +0000 UTC m=+0.130155672 container died cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kapitsa, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 14:04:24 np0005589310 systemd[1]: var-lib-containers-storage-overlay-2a9bb0019e6831657ddca07dfc901de3ed652b5e8d4c673d6268a58a1acb2764-merged.mount: Deactivated successfully.
Jan 20 14:04:24 np0005589310 podman[96428]: 2026-01-20 19:04:24.409601064 +0000 UTC m=+0.167213114 container remove cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kapitsa, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 20 14:04:24 np0005589310 systemd[1]: libpod-conmon-cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb.scope: Deactivated successfully.
Jan 20 14:04:24 np0005589310 ceph-mgr[75417]: [progress INFO root] Writing back 5 completed events
Jan 20 14:04:24 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 14:04:24 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:24 np0005589310 podman[96480]: 2026-01-20 19:04:24.562299851 +0000 UTC m=+0.038893386 container create b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_antonelli, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:04:24 np0005589310 systemd[1]: Started libpod-conmon-b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18.scope.
Jan 20 14:04:24 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:24 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0982d555c9d89ca20652dae46231262f78de8e50b0ab285601790aa2013b8183/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:24 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0982d555c9d89ca20652dae46231262f78de8e50b0ab285601790aa2013b8183/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:24 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0982d555c9d89ca20652dae46231262f78de8e50b0ab285601790aa2013b8183/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:24 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0982d555c9d89ca20652dae46231262f78de8e50b0ab285601790aa2013b8183/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:24 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0982d555c9d89ca20652dae46231262f78de8e50b0ab285601790aa2013b8183/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:24 np0005589310 podman[96480]: 2026-01-20 19:04:24.54670317 +0000 UTC m=+0.023296725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:24 np0005589310 podman[96480]: 2026-01-20 19:04:24.647153403 +0000 UTC m=+0.123746958 container init b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 14:04:24 np0005589310 podman[96480]: 2026-01-20 19:04:24.660765588 +0000 UTC m=+0.137359123 container start b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 20 14:04:24 np0005589310 podman[96480]: 2026-01-20 19:04:24.664309351 +0000 UTC m=+0.140902886 container attach b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 20 14:04:24 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:24 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:24 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:04:24 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:24 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:04:24 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:25 np0005589310 stoic_antonelli[96497]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:04:25 np0005589310 stoic_antonelli[96497]: --> All data devices are unavailable
Jan 20 14:04:25 np0005589310 systemd[1]: libpod-b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18.scope: Deactivated successfully.
Jan 20 14:04:25 np0005589310 podman[96480]: 2026-01-20 19:04:25.121894691 +0000 UTC m=+0.598488236 container died b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:25 np0005589310 systemd[1]: var-lib-containers-storage-overlay-0982d555c9d89ca20652dae46231262f78de8e50b0ab285601790aa2013b8183-merged.mount: Deactivated successfully.
Jan 20 14:04:25 np0005589310 podman[96480]: 2026-01-20 19:04:25.161743581 +0000 UTC m=+0.638337116 container remove b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_antonelli, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 14:04:25 np0005589310 systemd[1]: libpod-conmon-b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18.scope: Deactivated successfully.
Jan 20 14:04:25 np0005589310 python3[96542]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:25 np0005589310 podman[96571]: 2026-01-20 19:04:25.325743208 +0000 UTC m=+0.074226909 container create 134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06 (image=quay.io/ceph/ceph:v20, name=goofy_ellis, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 20 14:04:25 np0005589310 systemd[1]: Started libpod-conmon-134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06.scope.
Jan 20 14:04:25 np0005589310 podman[96571]: 2026-01-20 19:04:25.276684049 +0000 UTC m=+0.025167810 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:25 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:25 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6cf58f95daac689ba7f2c88dcdf2f95cff4d7be6de2be95298cba4000cfc63/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:25 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6cf58f95daac689ba7f2c88dcdf2f95cff4d7be6de2be95298cba4000cfc63/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:25 np0005589310 podman[96571]: 2026-01-20 19:04:25.414503322 +0000 UTC m=+0.162987083 container init 134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06 (image=quay.io/ceph/ceph:v20, name=goofy_ellis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:25 np0005589310 podman[96571]: 2026-01-20 19:04:25.426650071 +0000 UTC m=+0.175133782 container start 134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06 (image=quay.io/ceph/ceph:v20, name=goofy_ellis, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 14:04:25 np0005589310 podman[96571]: 2026-01-20 19:04:25.430588175 +0000 UTC m=+0.179071926 container attach 134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06 (image=quay.io/ceph/ceph:v20, name=goofy_ellis, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:04:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v87: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 13 KiB/s wr, 243 op/s
Jan 20 14:04:25 np0005589310 podman[96654]: 2026-01-20 19:04:25.703301741 +0000 UTC m=+0.058500394 container create d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hopper, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 20 14:04:25 np0005589310 systemd[1]: Started libpod-conmon-d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390.scope.
Jan 20 14:04:25 np0005589310 podman[96654]: 2026-01-20 19:04:25.677577738 +0000 UTC m=+0.032776471 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:25 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:25 np0005589310 podman[96654]: 2026-01-20 19:04:25.799501102 +0000 UTC m=+0.154699785 container init d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:04:25 np0005589310 podman[96654]: 2026-01-20 19:04:25.806509269 +0000 UTC m=+0.161707912 container start d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hopper, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:25 np0005589310 pedantic_hopper[96671]: 167 167
Jan 20 14:04:25 np0005589310 podman[96654]: 2026-01-20 19:04:25.809983243 +0000 UTC m=+0.165181886 container attach d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:25 np0005589310 systemd[1]: libpod-d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390.scope: Deactivated successfully.
Jan 20 14:04:25 np0005589310 podman[96654]: 2026-01-20 19:04:25.811190391 +0000 UTC m=+0.166389034 container died d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hopper, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 20 14:04:25 np0005589310 systemd[1]: var-lib-containers-storage-overlay-8f7ddb83a8a6af510785bb39b8e125cf2bea071318d8d4f6def881e58d8f4889-merged.mount: Deactivated successfully.
Jan 20 14:04:25 np0005589310 podman[96654]: 2026-01-20 19:04:25.850314912 +0000 UTC m=+0.205513555 container remove d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:04:25 np0005589310 systemd[1]: libpod-conmon-d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390.scope: Deactivated successfully.
Jan 20 14:04:25 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 14:04:25 np0005589310 goofy_ellis[96618]: 
Jan 20 14:04:25 np0005589310 goofy_ellis[96618]: [{"container_id": "6869885aa1d5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.20%", "created": "2026-01-20T19:03:00.062927Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-20T19:03:00.382853Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T19:04:23.830291Z", "memory_usage": 7808745, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-20T19:02:59.224839Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-90fff835-31df-513f-a409-b6642f04e6ac@crash.compute-0", "version": "20.2.0"}, {"container_id": "83d8b470dcb9", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "3.76%", "created": "2026-01-20T19:04:21.098040Z", "daemon_id": "cephfs.compute-0.djcctc", "daemon_name": "mds.cephfs.compute-0.djcctc", "daemon_type": "mds", "events": ["2026-01-20T19:04:21.171052Z daemon:mds.cephfs.compute-0.djcctc [INFO] \"Deployed mds.cephfs.compute-0.djcctc on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2026-01-20T19:04:23.830818Z", "memory_usage": 16001269, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-01-20T19:04:21.018164Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-90fff835-31df-513f-a409-b6642f04e6ac@mds.cephfs.compute-0.djcctc", "version": "20.2.0"}, {"container_id": "60642dffa907", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "16.08%", "created": "2026-01-20T19:02:09.582754Z", "daemon_id": "compute-0.meyjbf", "daemon_name": "mgr.compute-0.meyjbf", "daemon_type": "mgr", "events": ["2026-01-20T19:03:06.415164Z daemon:mgr.compute-0.meyjbf [INFO] \"Reconfigured mgr.compute-0.meyjbf on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T19:04:23.830193Z", "memory_usage": 549139251, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-20T19:02:09.191123Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-90fff835-31df-513f-a409-b6642f04e6ac@mgr.compute-0.meyjbf", "version": "20.2.0"}, {"container_id": "b5c99f106188", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.66%", "created": "2026-01-20T19:02:04.845645Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-20T19:03:05.023681Z daemon:mon.compute-0 [INFO] 
\"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T19:04:23.830061Z", "memory_request": 2147483648, "memory_usage": 42739957, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-20T19:02:07.125921Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-90fff835-31df-513f-a409-b6642f04e6ac@mon.compute-0", "version": "20.2.0"}, {"container_id": "eabc59bf78c2", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.84%", "created": "2026-01-20T19:03:31.006883Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-20T19:03:31.072531Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T19:04:23.830415Z", "memory_request": 4294967296, "memory_usage": 60628664, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-20T19:03:30.915540Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-90fff835-31df-513f-a409-b6642f04e6ac@osd.0", "version": "20.2.0"}, {"container_id": "bfb3a392dadb", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "2.72%", "created": "2026-01-20T19:03:35.234981Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-20T19:03:35.344145Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T19:04:23.830512Z", "memory_request": 4294967296, "memory_usage": 59663974, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-20T19:03:35.071683Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-90fff835-31df-513f-a409-b6642f04e6ac@osd.1", "version": "20.2.0"}, {"container_id": "d045a60defb8", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "2.65%", "created": "2026-01-20T19:03:41.789132Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-20T19:03:41.897662Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T19:04:23.830605Z", "memory_request": 4294967296, "memory_usage": 57860423, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-20T19:03:41.681331Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-90fff835-31df-513f-a409-b6642f04e6ac@osd.2", "version": "20.2.0"}, {"container_id": "f7b32e8a4eac", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], 
"container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac68
Jan 20 14:04:25 np0005589310 systemd[1]: libpod-134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06.scope: Deactivated successfully.
Jan 20 14:04:25 np0005589310 podman[96571]: 2026-01-20 19:04:25.890750876 +0000 UTC m=+0.639234537 container died 134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06 (image=quay.io/ceph/ceph:v20, name=goofy_ellis, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:26 np0005589310 systemd[1]: var-lib-containers-storage-overlay-dc6cf58f95daac689ba7f2c88dcdf2f95cff4d7be6de2be95298cba4000cfc63-merged.mount: Deactivated successfully.
Jan 20 14:04:26 np0005589310 rsyslogd[1007]: message too long (8843) with configured size 8096, begin of message is: [{"container_id": "6869885aa1d5", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 20 14:04:26 np0005589310 podman[96571]: 2026-01-20 19:04:26.046933617 +0000 UTC m=+0.795417298 container remove 134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06 (image=quay.io/ceph/ceph:v20, name=goofy_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 14:04:26 np0005589310 podman[96708]: 2026-01-20 19:04:26.03368274 +0000 UTC m=+0.042347219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:26 np0005589310 podman[96708]: 2026-01-20 19:04:26.169765973 +0000 UTC m=+0.178430472 container create 10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 20 14:04:26 np0005589310 systemd[1]: libpod-conmon-134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06.scope: Deactivated successfully.
Jan 20 14:04:26 np0005589310 systemd[1]: Started libpod-conmon-10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8.scope.
Jan 20 14:04:26 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:26 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a05c222b32a33dac40d6adcc94a61a081325e98a7a321a7bf22065697f561/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:26 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a05c222b32a33dac40d6adcc94a61a081325e98a7a321a7bf22065697f561/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:26 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a05c222b32a33dac40d6adcc94a61a081325e98a7a321a7bf22065697f561/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:26 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a05c222b32a33dac40d6adcc94a61a081325e98a7a321a7bf22065697f561/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:26 np0005589310 podman[96708]: 2026-01-20 19:04:26.275541692 +0000 UTC m=+0.284206171 container init 10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_snyder, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 20 14:04:26 np0005589310 podman[96708]: 2026-01-20 19:04:26.281564725 +0000 UTC m=+0.290229184 container start 10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:04:26 np0005589310 podman[96708]: 2026-01-20 19:04:26.284470654 +0000 UTC m=+0.293135113 container attach 10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]: {
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:    "0": [
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:        {
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "devices": [
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "/dev/loop3"
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            ],
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "lv_name": "ceph_lv0",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "lv_size": "21470642176",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "name": "ceph_lv0",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "tags": {
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.cluster_name": "ceph",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.crush_device_class": "",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.encrypted": "0",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.objectstore": "bluestore",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.osd_id": "0",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.type": "block",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.vdo": "0",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.with_tpm": "0"
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            },
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "type": "block",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "vg_name": "ceph_vg0"
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:        }
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:    ],
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:    "1": [
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:        {
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "devices": [
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "/dev/loop4"
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            ],
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "lv_name": "ceph_lv1",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "lv_size": "21470642176",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "name": "ceph_lv1",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "tags": {
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.cluster_name": "ceph",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.crush_device_class": "",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.encrypted": "0",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.objectstore": "bluestore",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.osd_id": "1",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.type": "block",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.vdo": "0",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.with_tpm": "0"
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            },
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "type": "block",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "vg_name": "ceph_vg1"
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:        }
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:    ],
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:    "2": [
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:        {
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "devices": [
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "/dev/loop5"
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            ],
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "lv_name": "ceph_lv2",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "lv_size": "21470642176",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "name": "ceph_lv2",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "tags": {
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.cluster_name": "ceph",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.crush_device_class": "",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.encrypted": "0",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.objectstore": "bluestore",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.osd_id": "2",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.type": "block",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.vdo": "0",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:                "ceph.with_tpm": "0"
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            },
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "type": "block",
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:            "vg_name": "ceph_vg2"
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:        }
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]:    ]
Jan 20 14:04:26 np0005589310 wizardly_snyder[96726]: }
Jan 20 14:04:26 np0005589310 systemd[1]: libpod-10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8.scope: Deactivated successfully.
Jan 20 14:04:26 np0005589310 podman[96708]: 2026-01-20 19:04:26.633299154 +0000 UTC m=+0.641963643 container died 10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 14:04:26 np0005589310 systemd[1]: var-lib-containers-storage-overlay-be5a05c222b32a33dac40d6adcc94a61a081325e98a7a321a7bf22065697f561-merged.mount: Deactivated successfully.
Jan 20 14:04:26 np0005589310 podman[96708]: 2026-01-20 19:04:26.733771927 +0000 UTC m=+0.742436386 container remove 10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_snyder, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 20 14:04:26 np0005589310 systemd[1]: libpod-conmon-10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8.scope: Deactivated successfully.
Jan 20 14:04:27 np0005589310 python3[96796]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:27 np0005589310 podman[96822]: 2026-01-20 19:04:27.108971214 +0000 UTC m=+0.051557039 container create 0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7 (image=quay.io/ceph/ceph:v20, name=reverent_diffie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:04:27 np0005589310 systemd[1]: Started libpod-conmon-0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7.scope.
Jan 20 14:04:27 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:27 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad475889a942c1ad5d235e7b14691fd5920e4b889713cc1230b6b4d34f8b4c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:27 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad475889a942c1ad5d235e7b14691fd5920e4b889713cc1230b6b4d34f8b4c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:27 np0005589310 podman[96822]: 2026-01-20 19:04:27.085342271 +0000 UTC m=+0.027928136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:27 np0005589310 podman[96822]: 2026-01-20 19:04:27.187843464 +0000 UTC m=+0.130429299 container init 0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7 (image=quay.io/ceph/ceph:v20, name=reverent_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:04:27 np0005589310 podman[96822]: 2026-01-20 19:04:27.195439354 +0000 UTC m=+0.138025169 container start 0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7 (image=quay.io/ceph/ceph:v20, name=reverent_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:27 np0005589310 podman[96822]: 2026-01-20 19:04:27.200372182 +0000 UTC m=+0.142957997 container attach 0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7 (image=quay.io/ceph/ceph:v20, name=reverent_diffie, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:04:27 np0005589310 podman[96850]: 2026-01-20 19:04:27.222598511 +0000 UTC m=+0.037701798 container create 4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nightingale, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 14:04:27 np0005589310 systemd[1]: Started libpod-conmon-4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2.scope.
Jan 20 14:04:27 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:27 np0005589310 podman[96850]: 2026-01-20 19:04:27.291201395 +0000 UTC m=+0.106304702 container init 4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:04:27 np0005589310 podman[96850]: 2026-01-20 19:04:27.298383747 +0000 UTC m=+0.113487034 container start 4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:04:27 np0005589310 adoring_nightingale[96868]: 167 167
Jan 20 14:04:27 np0005589310 podman[96850]: 2026-01-20 19:04:27.301560473 +0000 UTC m=+0.116663760 container attach 4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nightingale, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 14:04:27 np0005589310 systemd[1]: libpod-4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2.scope: Deactivated successfully.
Jan 20 14:04:27 np0005589310 podman[96850]: 2026-01-20 19:04:27.302630348 +0000 UTC m=+0.117733655 container died 4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 14:04:27 np0005589310 podman[96850]: 2026-01-20 19:04:27.205873844 +0000 UTC m=+0.020977161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:27 np0005589310 systemd[1]: var-lib-containers-storage-overlay-4c7d836dc5bdbc06d7a84fcf109bc5ae5ba22221c72e6c84c3fd3b4b0d57c8eb-merged.mount: Deactivated successfully.
Jan 20 14:04:27 np0005589310 podman[96850]: 2026-01-20 19:04:27.342407945 +0000 UTC m=+0.157511232 container remove 4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nightingale, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:27 np0005589310 systemd[1]: libpod-conmon-4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2.scope: Deactivated successfully.
Jan 20 14:04:27 np0005589310 podman[96912]: 2026-01-20 19:04:27.509552216 +0000 UTC m=+0.053571656 container create dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 14:04:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v88: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 10 KiB/s wr, 192 op/s
Jan 20 14:04:27 np0005589310 systemd[1]: Started libpod-conmon-dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4.scope.
Jan 20 14:04:27 np0005589310 podman[96912]: 2026-01-20 19:04:27.483385314 +0000 UTC m=+0.027404764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:27 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:27 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be001c42c9842634ddd1f9f5e1aa70533e981b774a54d14784bfaf688c3ab411/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:27 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be001c42c9842634ddd1f9f5e1aa70533e981b774a54d14784bfaf688c3ab411/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:27 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be001c42c9842634ddd1f9f5e1aa70533e981b774a54d14784bfaf688c3ab411/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:27 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be001c42c9842634ddd1f9f5e1aa70533e981b774a54d14784bfaf688c3ab411/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:27 np0005589310 podman[96912]: 2026-01-20 19:04:27.610598794 +0000 UTC m=+0.154618254 container init dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:04:27 np0005589310 podman[96912]: 2026-01-20 19:04:27.618063842 +0000 UTC m=+0.162083262 container start dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 20 14:04:27 np0005589310 podman[96912]: 2026-01-20 19:04:27.632791262 +0000 UTC m=+0.176810692 container attach dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:04:27 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 20 14:04:27 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2946416481' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 20 14:04:27 np0005589310 reverent_diffie[96844]: 
Jan 20 14:04:27 np0005589310 reverent_diffie[96844]: {"fsid":"90fff835-31df-513f-a409-b6642f04e6ac","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":140,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":39,"num_osds":3,"num_up_osds":3,"osd_up_since":1768935829,"num_in_osds":3,"osd_in_since":1768935800,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":11}],"num_pgs":11,"num_pools":11,"num_objects":249,"data_bytes":472000,"bytes_used":84451328,"bytes_avail":64327475200,"bytes_total":64411926528,"read_bytes_sec":92474,"write_bytes_sec":13308,"read_op_per_sec":151,"write_op_per_sec":92},"fsmap":{"epoch":5,"btime":"2026-01-20T19:04:23:700833+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.djcctc","status":"up:active","gid":14258}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":3,"modified":"2026-01-20T19:04:23.524411+0000","services":{"mds":{"daemons":{"summary":"","cephfs.compute-0.djcctc":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14250":{"start_epoch":3,"start_stamp":"2026-01-20T19:04:23.021828+0000","gid":14250,"addr":"192.168.122.100:0/3430692269","metadata":{"arch":"x86_64","ceph_release":"tentacle","ceph_version":"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)","ceph_version_short":"20.2.0","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.dbzrzk","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864312","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"6427199d-52d7-4810-99bf-ec966a7007f4","zone_name":"default","zonegroup_id":"7f3fa8c0-913b-4a23-89e0-2cf7070dd47e","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"b6bedcb9-562c-42b4-be71-c432b8518626":{"message":"Global Recovery Event (5s)\n      [=========================...] ","progress":0.90909093618392944,"add_to_ceph_s":true}}}
Jan 20 14:04:27 np0005589310 ceph-mds[95894]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 20 14:04:27 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mds-cephfs-compute-0-djcctc[95889]: 2026-01-20T19:04:27.717+0000 7f97ce1b8640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 20 14:04:27 np0005589310 systemd[1]: libpod-0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7.scope: Deactivated successfully.
Jan 20 14:04:27 np0005589310 podman[96822]: 2026-01-20 19:04:27.727056228 +0000 UTC m=+0.669642043 container died 0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7 (image=quay.io/ceph/ceph:v20, name=reverent_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 14:04:27 np0005589310 systemd[1]: var-lib-containers-storage-overlay-dad475889a942c1ad5d235e7b14691fd5920e4b889713cc1230b6b4d34f8b4c2-merged.mount: Deactivated successfully.
Jan 20 14:04:27 np0005589310 podman[96822]: 2026-01-20 19:04:27.769907728 +0000 UTC m=+0.712493543 container remove 0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7 (image=quay.io/ceph/ceph:v20, name=reverent_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:04:27 np0005589310 systemd[1]: libpod-conmon-0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7.scope: Deactivated successfully.
Jan 20 14:04:28 np0005589310 lvm[97022]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:04:28 np0005589310 lvm[97023]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:04:28 np0005589310 lvm[97023]: VG ceph_vg1 finished
Jan 20 14:04:28 np0005589310 lvm[97022]: VG ceph_vg0 finished
Jan 20 14:04:28 np0005589310 lvm[97025]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:04:28 np0005589310 lvm[97025]: VG ceph_vg2 finished
Jan 20 14:04:28 np0005589310 lvm[97026]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:04:28 np0005589310 lvm[97026]: VG ceph_vg0 finished
Jan 20 14:04:28 np0005589310 inspiring_moser[96929]: {}
Jan 20 14:04:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:28 np0005589310 systemd[1]: libpod-dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4.scope: Deactivated successfully.
Jan 20 14:04:28 np0005589310 systemd[1]: libpod-dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4.scope: Consumed 1.343s CPU time.
Jan 20 14:04:28 np0005589310 podman[96912]: 2026-01-20 19:04:28.478868617 +0000 UTC m=+1.022888047 container died dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:28 np0005589310 systemd[1]: var-lib-containers-storage-overlay-be001c42c9842634ddd1f9f5e1aa70533e981b774a54d14784bfaf688c3ab411-merged.mount: Deactivated successfully.
Jan 20 14:04:28 np0005589310 podman[96912]: 2026-01-20 19:04:28.681489623 +0000 UTC m=+1.225509023 container remove dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:28 np0005589310 systemd[1]: libpod-conmon-dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4.scope: Deactivated successfully.
Jan 20 14:04:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:04:28 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:04:28 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:28 np0005589310 python3[97066]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:28 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:28 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:04:28 np0005589310 podman[97070]: 2026-01-20 19:04:28.808029997 +0000 UTC m=+0.039010280 container create b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b (image=quay.io/ceph/ceph:v20, name=xenodochial_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:04:28 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:04:28 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:28 np0005589310 systemd[1]: Started libpod-conmon-b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b.scope.
Jan 20 14:04:28 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:28 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52c10d548a4c49373142a83379de11baa479ac684c629d9cb08976dee370c91/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:28 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52c10d548a4c49373142a83379de11baa479ac684c629d9cb08976dee370c91/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:28 np0005589310 podman[97070]: 2026-01-20 19:04:28.790740735 +0000 UTC m=+0.021721038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:28 np0005589310 podman[97070]: 2026-01-20 19:04:28.887865679 +0000 UTC m=+0.118845962 container init b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b (image=quay.io/ceph/ceph:v20, name=xenodochial_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:04:28 np0005589310 podman[97070]: 2026-01-20 19:04:28.898970624 +0000 UTC m=+0.129950907 container start b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b (image=quay.io/ceph/ceph:v20, name=xenodochial_shannon, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:28 np0005589310 podman[97070]: 2026-01-20 19:04:28.904641809 +0000 UTC m=+0.135622092 container attach b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b (image=quay.io/ceph/ceph:v20, name=xenodochial_shannon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:29 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 14:04:29 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/479330853' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 14:04:29 np0005589310 xenodochial_shannon[97113]: 
Jan 20 14:04:29 np0005589310 xenodochial_shannon[97113]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.dbzrzk","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 20 14:04:29 np0005589310 systemd[1]: libpod-b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b.scope: Deactivated successfully.
Jan 20 14:04:29 np0005589310 podman[97070]: 2026-01-20 19:04:29.344849052 +0000 UTC m=+0.575829335 container died b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b (image=quay.io/ceph/ceph:v20, name=xenodochial_shannon, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:04:29 np0005589310 systemd[1]: var-lib-containers-storage-overlay-c52c10d548a4c49373142a83379de11baa479ac684c629d9cb08976dee370c91-merged.mount: Deactivated successfully.
Jan 20 14:04:29 np0005589310 podman[97227]: 2026-01-20 19:04:29.372508978 +0000 UTC m=+0.068766171 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 14:04:29 np0005589310 podman[97070]: 2026-01-20 19:04:29.383769335 +0000 UTC m=+0.614749618 container remove b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b (image=quay.io/ceph/ceph:v20, name=xenodochial_shannon, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:04:29 np0005589310 systemd[1]: libpod-conmon-b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b.scope: Deactivated successfully.
Jan 20 14:04:29 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event b6bedcb9-562c-42b4-be71-c432b8518626 (Global Recovery Event) in 10 seconds
Jan 20 14:04:29 np0005589310 podman[97227]: 2026-01-20 19:04:29.510872718 +0000 UTC m=+0.207129871 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 9.1 KiB/s wr, 170 op/s
Jan 20 14:04:29 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:29 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:04:30 np0005589310 python3[97455]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:30 np0005589310 podman[97501]: 2026-01-20 19:04:30.493631566 +0000 UTC m=+0.040104442 container create 90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86 (image=quay.io/ceph/ceph:v20, name=happy_jones, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:30 np0005589310 systemd[1]: Started libpod-conmon-90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86.scope.
Jan 20 14:04:30 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:30 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14286c1501ac3b48a08609fcf6d0e98cbeff4792cab107c9900ec16eb5829d63/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:30 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14286c1501ac3b48a08609fcf6d0e98cbeff4792cab107c9900ec16eb5829d63/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:30 np0005589310 podman[97501]: 2026-01-20 19:04:30.47611863 +0000 UTC m=+0.022591506 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:30 np0005589310 podman[97501]: 2026-01-20 19:04:30.577699979 +0000 UTC m=+0.124172965 container init 90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86 (image=quay.io/ceph/ceph:v20, name=happy_jones, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:04:30 np0005589310 podman[97501]: 2026-01-20 19:04:30.583789433 +0000 UTC m=+0.130262309 container start 90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86 (image=quay.io/ceph/ceph:v20, name=happy_jones, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:30 np0005589310 podman[97501]: 2026-01-20 19:04:30.587170153 +0000 UTC m=+0.133643129 container attach 90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86 (image=quay.io/ceph/ceph:v20, name=happy_jones, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:04:30 np0005589310 podman[97546]: 2026-01-20 19:04:30.732782415 +0000 UTC m=+0.040342897 container create b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jemison, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:30 np0005589310 systemd[1]: Started libpod-conmon-b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91.scope.
Jan 20 14:04:30 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:30 np0005589310 podman[97546]: 2026-01-20 19:04:30.802466827 +0000 UTC m=+0.110027309 container init b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:30 np0005589310 podman[97546]: 2026-01-20 19:04:30.807095597 +0000 UTC m=+0.114656079 container start b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jemison, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:30 np0005589310 podman[97546]: 2026-01-20 19:04:30.712218968 +0000 UTC m=+0.019779460 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:30 np0005589310 podman[97546]: 2026-01-20 19:04:30.810507398 +0000 UTC m=+0.118067880 container attach b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jemison, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 14:04:30 np0005589310 eloquent_jemison[97572]: 167 167
Jan 20 14:04:30 np0005589310 systemd[1]: libpod-b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91.scope: Deactivated successfully.
Jan 20 14:04:30 np0005589310 podman[97546]: 2026-01-20 19:04:30.812379792 +0000 UTC m=+0.119940274 container died b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:04:30 np0005589310 systemd[1]: var-lib-containers-storage-overlay-0e4724f028626e07355c8f70411b39be66b5f930023c34356c61d4edd8f85787-merged.mount: Deactivated successfully.
Jan 20 14:04:30 np0005589310 podman[97546]: 2026-01-20 19:04:30.843774227 +0000 UTC m=+0.151334709 container remove b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 14:04:30 np0005589310 systemd[1]: libpod-conmon-b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91.scope: Deactivated successfully.
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 20 14:04:30 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/136249076' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 20 14:04:30 np0005589310 happy_jones[97521]: mimic
Jan 20 14:04:30 np0005589310 podman[97596]: 2026-01-20 19:04:30.987299388 +0000 UTC m=+0.039655500 container create 5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 20 14:04:30 np0005589310 systemd[1]: libpod-90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86.scope: Deactivated successfully.
Jan 20 14:04:31 np0005589310 systemd[1]: Started libpod-conmon-5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff.scope.
Jan 20 14:04:31 np0005589310 podman[97612]: 2026-01-20 19:04:31.04683469 +0000 UTC m=+0.037724165 container died 90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86 (image=quay.io/ceph/ceph:v20, name=happy_jones, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:31 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:31 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecb28cda8a6ea004a2209483e6f57e440d67b414a8cdcad9e6cdde5de1fc905/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:31 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecb28cda8a6ea004a2209483e6f57e440d67b414a8cdcad9e6cdde5de1fc905/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:31 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecb28cda8a6ea004a2209483e6f57e440d67b414a8cdcad9e6cdde5de1fc905/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:31 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecb28cda8a6ea004a2209483e6f57e440d67b414a8cdcad9e6cdde5de1fc905/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:31 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecb28cda8a6ea004a2209483e6f57e440d67b414a8cdcad9e6cdde5de1fc905/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:31 np0005589310 podman[97596]: 2026-01-20 19:04:30.967256674 +0000 UTC m=+0.019612796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:31 np0005589310 systemd[1]: var-lib-containers-storage-overlay-14286c1501ac3b48a08609fcf6d0e98cbeff4792cab107c9900ec16eb5829d63-merged.mount: Deactivated successfully.
Jan 20 14:04:31 np0005589310 podman[97596]: 2026-01-20 19:04:31.079711859 +0000 UTC m=+0.132067971 container init 5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_galois, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 14:04:31 np0005589310 podman[97596]: 2026-01-20 19:04:31.085454915 +0000 UTC m=+0.137811027 container start 5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_galois, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:31 np0005589310 podman[97596]: 2026-01-20 19:04:31.089199124 +0000 UTC m=+0.141555236 container attach 5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_galois, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:31 np0005589310 podman[97612]: 2026-01-20 19:04:31.093417354 +0000 UTC m=+0.084306799 container remove 90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86 (image=quay.io/ceph/ceph:v20, name=happy_jones, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:04:31 np0005589310 systemd[1]: libpod-conmon-90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86.scope: Deactivated successfully.
Jan 20 14:04:31 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:31 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:31 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:04:31 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:31 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:04:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:04:31
Jan 20 14:04:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:04:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:04:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'vms', 'volumes', 'cephfs.cephfs.data', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control']
Jan 20 14:04:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:04:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v90: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 7.8 KiB/s wr, 146 op/s
Jan 20 14:04:31 np0005589310 wizardly_galois[97625]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:04:31 np0005589310 wizardly_galois[97625]: --> All data devices are unavailable
Jan 20 14:04:31 np0005589310 systemd[1]: libpod-5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff.scope: Deactivated successfully.
Jan 20 14:04:31 np0005589310 podman[97596]: 2026-01-20 19:04:31.613560686 +0000 UTC m=+0.665916838 container died 5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:04:31 np0005589310 systemd[1]: var-lib-containers-storage-overlay-2ecb28cda8a6ea004a2209483e6f57e440d67b414a8cdcad9e6cdde5de1fc905-merged.mount: Deactivated successfully.
Jan 20 14:04:31 np0005589310 podman[97596]: 2026-01-20 19:04:31.67109883 +0000 UTC m=+0.723454982 container remove 5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_galois, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 14:04:31 np0005589310 systemd[1]: libpod-conmon-5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff.scope: Deactivated successfully.
Jan 20 14:04:32 np0005589310 python3[97736]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:04:32 np0005589310 podman[97750]: 2026-01-20 19:04:32.247193866 +0000 UTC m=+0.053638682 container create 5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc (image=quay.io/ceph/ceph:v20, name=strange_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:04:32 np0005589310 podman[97751]: 2026-01-20 19:04:32.282955405 +0000 UTC m=+0.068884465 container create 7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lalande, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:32 np0005589310 systemd[1]: Started libpod-conmon-5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc.scope.
Jan 20 14:04:32 np0005589310 systemd[1]: Started libpod-conmon-7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211.scope.
Jan 20 14:04:32 np0005589310 podman[97750]: 2026-01-20 19:04:32.215088215 +0000 UTC m=+0.021533051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:04:32 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:32 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:32 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c9950af01ec4add59d76be3cb3d92c73be1024da4d64510274e271f00d7fed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:32 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c9950af01ec4add59d76be3cb3d92c73be1024da4d64510274e271f00d7fed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:32 np0005589310 podman[97750]: 2026-01-20 19:04:32.349115263 +0000 UTC m=+0.155560099 container init 5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc (image=quay.io/ceph/ceph:v20, name=strange_meitner, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 14:04:32 np0005589310 podman[97751]: 2026-01-20 19:04:32.261162128 +0000 UTC m=+0.047091228 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:32 np0005589310 podman[97751]: 2026-01-20 19:04:32.353796523 +0000 UTC m=+0.139725603 container init 7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:04:32 np0005589310 podman[97750]: 2026-01-20 19:04:32.3574251 +0000 UTC m=+0.163869916 container start 5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc (image=quay.io/ceph/ceph:v20, name=strange_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:32 np0005589310 podman[97751]: 2026-01-20 19:04:32.358985347 +0000 UTC m=+0.144914407 container start 7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lalande, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:04:32 np0005589310 podman[97750]: 2026-01-20 19:04:32.360566524 +0000 UTC m=+0.167011350 container attach 5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc (image=quay.io/ceph/ceph:v20, name=strange_meitner, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:32 np0005589310 systemd[1]: libpod-7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211.scope: Deactivated successfully.
Jan 20 14:04:32 np0005589310 magical_lalande[97783]: 167 167
Jan 20 14:04:32 np0005589310 conmon[97783]: conmon 7b08734b1003880d5d5b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211.scope/container/memory.events
Jan 20 14:04:32 np0005589310 podman[97751]: 2026-01-20 19:04:32.365182373 +0000 UTC m=+0.151111433 container attach 7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:32 np0005589310 podman[97751]: 2026-01-20 19:04:32.365500302 +0000 UTC m=+0.151429362 container died 7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lalande, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:32 np0005589310 systemd[1]: var-lib-containers-storage-overlay-a498b444b816cd71ff8142c1c6fc3132cd678eee18db97ed11aff1951bdcfa6f-merged.mount: Deactivated successfully.
Jan 20 14:04:32 np0005589310 podman[97751]: 2026-01-20 19:04:32.402104119 +0000 UTC m=+0.188033179 container remove 7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 14:04:32 np0005589310 systemd[1]: libpod-conmon-7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211.scope: Deactivated successfully.
Jan 20 14:04:32 np0005589310 podman[97828]: 2026-01-20 19:04:32.558272311 +0000 UTC m=+0.048068320 container create 82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:32 np0005589310 systemd[1]: Started libpod-conmon-82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656.scope.
Jan 20 14:04:32 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:32 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7de4725e49d955d5a0e3240c77671a9d2dda1370a75fc15e756a2ee961e2a457/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:32 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7de4725e49d955d5a0e3240c77671a9d2dda1370a75fc15e756a2ee961e2a457/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:32 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7de4725e49d955d5a0e3240c77671a9d2dda1370a75fc15e756a2ee961e2a457/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:32 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7de4725e49d955d5a0e3240c77671a9d2dda1370a75fc15e756a2ee961e2a457/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:32 np0005589310 podman[97828]: 2026-01-20 19:04:32.536870144 +0000 UTC m=+0.026666243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:32 np0005589310 podman[97828]: 2026-01-20 19:04:32.639076697 +0000 UTC m=+0.128872726 container init 82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:04:32 np0005589310 podman[97828]: 2026-01-20 19:04:32.647917926 +0000 UTC m=+0.137713935 container start 82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:32 np0005589310 podman[97828]: 2026-01-20 19:04:32.651116173 +0000 UTC m=+0.140912182 container attach 82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:32 np0005589310 competent_bell[97845]: {
Jan 20 14:04:32 np0005589310 competent_bell[97845]:    "0": [
Jan 20 14:04:32 np0005589310 competent_bell[97845]:        {
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "devices": [
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "/dev/loop3"
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            ],
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "lv_name": "ceph_lv0",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "lv_size": "21470642176",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "name": "ceph_lv0",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "tags": {
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.cluster_name": "ceph",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.crush_device_class": "",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.encrypted": "0",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.objectstore": "bluestore",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.osd_id": "0",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.type": "block",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.vdo": "0",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.with_tpm": "0"
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            },
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "type": "block",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "vg_name": "ceph_vg0"
Jan 20 14:04:32 np0005589310 competent_bell[97845]:        }
Jan 20 14:04:32 np0005589310 competent_bell[97845]:    ],
Jan 20 14:04:32 np0005589310 competent_bell[97845]:    "1": [
Jan 20 14:04:32 np0005589310 competent_bell[97845]:        {
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "devices": [
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "/dev/loop4"
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            ],
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "lv_name": "ceph_lv1",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "lv_size": "21470642176",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "name": "ceph_lv1",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "tags": {
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.cluster_name": "ceph",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.crush_device_class": "",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.encrypted": "0",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.objectstore": "bluestore",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.osd_id": "1",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.type": "block",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.vdo": "0",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.with_tpm": "0"
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            },
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "type": "block",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "vg_name": "ceph_vg1"
Jan 20 14:04:32 np0005589310 competent_bell[97845]:        }
Jan 20 14:04:32 np0005589310 competent_bell[97845]:    ],
Jan 20 14:04:32 np0005589310 competent_bell[97845]:    "2": [
Jan 20 14:04:32 np0005589310 competent_bell[97845]:        {
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "devices": [
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "/dev/loop5"
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            ],
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "lv_name": "ceph_lv2",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "lv_size": "21470642176",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "name": "ceph_lv2",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "tags": {
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.cluster_name": "ceph",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.crush_device_class": "",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.encrypted": "0",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.objectstore": "bluestore",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.osd_id": "2",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.type": "block",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.vdo": "0",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:                "ceph.with_tpm": "0"
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            },
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "type": "block",
Jan 20 14:04:32 np0005589310 competent_bell[97845]:            "vg_name": "ceph_vg2"
Jan 20 14:04:32 np0005589310 competent_bell[97845]:        }
Jan 20 14:04:32 np0005589310 competent_bell[97845]:    ]
Jan 20 14:04:32 np0005589310 competent_bell[97845]: }
Jan 20 14:04:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 20 14:04:32 np0005589310 strange_meitner[97781]: 
Jan 20 14:04:32 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4206978851' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 20 14:04:32 np0005589310 systemd[1]: libpod-82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656.scope: Deactivated successfully.
Jan 20 14:04:32 np0005589310 podman[97828]: 2026-01-20 19:04:32.961181523 +0000 UTC m=+0.450977532 container died 82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:32 np0005589310 systemd[1]: libpod-5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc.scope: Deactivated successfully.
Jan 20 14:04:32 np0005589310 strange_meitner[97781]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Jan 20 14:04:32 np0005589310 podman[97750]: 2026-01-20 19:04:32.971716463 +0000 UTC m=+0.778161299 container died 5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc (image=quay.io/ceph/ceph:v20, name=strange_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:04:32 np0005589310 systemd[1]: var-lib-containers-storage-overlay-7de4725e49d955d5a0e3240c77671a9d2dda1370a75fc15e756a2ee961e2a457-merged.mount: Deactivated successfully.
Jan 20 14:04:33 np0005589310 systemd[1]: var-lib-containers-storage-overlay-56c9950af01ec4add59d76be3cb3d92c73be1024da4d64510274e271f00d7fed-merged.mount: Deactivated successfully.
Jan 20 14:04:33 np0005589310 podman[97750]: 2026-01-20 19:04:33.026205224 +0000 UTC m=+0.832650040 container remove 5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc (image=quay.io/ceph/ceph:v20, name=strange_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:33 np0005589310 systemd[1]: libpod-conmon-5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc.scope: Deactivated successfully.
Jan 20 14:04:33 np0005589310 podman[97828]: 2026-01-20 19:04:33.047781956 +0000 UTC m=+0.537577965 container remove 82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 14:04:33 np0005589310 systemd[1]: libpod-conmon-82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656.scope: Deactivated successfully.
Jan 20 14:04:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 7.2 KiB/s wr, 134 op/s
Jan 20 14:04:33 np0005589310 podman[97940]: 2026-01-20 19:04:33.591981707 +0000 UTC m=+0.058742963 container create 613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 14:04:33 np0005589310 systemd[1]: Started libpod-conmon-613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7.scope.
Jan 20 14:04:33 np0005589310 podman[97940]: 2026-01-20 19:04:33.568314676 +0000 UTC m=+0.035075932 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:33 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:33 np0005589310 podman[97940]: 2026-01-20 19:04:33.683268431 +0000 UTC m=+0.150029707 container init 613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_booth, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:04:33 np0005589310 podman[97940]: 2026-01-20 19:04:33.691590658 +0000 UTC m=+0.158351884 container start 613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_booth, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 14:04:33 np0005589310 podman[97940]: 2026-01-20 19:04:33.695668105 +0000 UTC m=+0.162429511 container attach 613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:33 np0005589310 hopeful_booth[97957]: 167 167
Jan 20 14:04:33 np0005589310 systemd[1]: libpod-613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7.scope: Deactivated successfully.
Jan 20 14:04:33 np0005589310 podman[97940]: 2026-01-20 19:04:33.698710387 +0000 UTC m=+0.165471713 container died 613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_booth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:04:33 np0005589310 systemd[1]: var-lib-containers-storage-overlay-a727be090044c58a609acb2410840f687c2670e1ed0110828291eb655ae46a73-merged.mount: Deactivated successfully.
Jan 20 14:04:33 np0005589310 podman[97940]: 2026-01-20 19:04:33.740976839 +0000 UTC m=+0.207738085 container remove 613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_booth, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:04:33 np0005589310 systemd[1]: libpod-conmon-613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7.scope: Deactivated successfully.
Jan 20 14:04:33 np0005589310 podman[97981]: 2026-01-20 19:04:33.933643707 +0000 UTC m=+0.051542413 container create 992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:33 np0005589310 systemd[1]: Started libpod-conmon-992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c.scope.
Jan 20 14:04:33 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:04:34 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0240a2fc6786c1081cb6d7e7fd5966321c6d53096f79e03e31162d58bcc38bf9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:34 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0240a2fc6786c1081cb6d7e7fd5966321c6d53096f79e03e31162d58bcc38bf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:34 np0005589310 podman[97981]: 2026-01-20 19:04:33.910525218 +0000 UTC m=+0.028423954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:04:34 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0240a2fc6786c1081cb6d7e7fd5966321c6d53096f79e03e31162d58bcc38bf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:34 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0240a2fc6786c1081cb6d7e7fd5966321c6d53096f79e03e31162d58bcc38bf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:04:34 np0005589310 podman[97981]: 2026-01-20 19:04:34.02193042 +0000 UTC m=+0.139829156 container init 992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cohen, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:04:34 np0005589310 podman[97981]: 2026-01-20 19:04:34.031404133 +0000 UTC m=+0.149302839 container start 992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cohen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:04:34 np0005589310 podman[97981]: 2026-01-20 19:04:34.035550042 +0000 UTC m=+0.153448768 container attach 992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.347116474187133e-07 of space, bias 4.0, pg target 0.000761653976902456 quantized to 16 (current 1)
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 14:04:34 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 20 14:04:34 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:04:34 np0005589310 ceph-mgr[75417]: [progress INFO root] Writing back 6 completed events
Jan 20 14:04:34 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 14:04:34 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:34 np0005589310 lvm[98076]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:04:34 np0005589310 lvm[98073]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:04:34 np0005589310 lvm[98076]: VG ceph_vg1 finished
Jan 20 14:04:34 np0005589310 lvm[98073]: VG ceph_vg0 finished
Jan 20 14:04:34 np0005589310 lvm[98078]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:04:34 np0005589310 lvm[98078]: VG ceph_vg2 finished
Jan 20 14:04:34 np0005589310 strange_cohen[97997]: {}
Jan 20 14:04:34 np0005589310 systemd[1]: libpod-992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c.scope: Deactivated successfully.
Jan 20 14:04:34 np0005589310 podman[97981]: 2026-01-20 19:04:34.9225554 +0000 UTC m=+1.040454106 container died 992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:04:34 np0005589310 systemd[1]: libpod-992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c.scope: Consumed 1.386s CPU time.
Jan 20 14:04:34 np0005589310 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:04:34 np0005589310 systemd[1]: var-lib-containers-storage-overlay-0240a2fc6786c1081cb6d7e7fd5966321c6d53096f79e03e31162d58bcc38bf9-merged.mount: Deactivated successfully.
Jan 20 14:04:34 np0005589310 podman[97981]: 2026-01-20 19:04:34.973635201 +0000 UTC m=+1.091533907 container remove 992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 20 14:04:34 np0005589310 systemd[1]: libpod-conmon-992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c.scope: Deactivated successfully.
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 20 14:04:35 np0005589310 ceph-mgr[75417]: [progress INFO root] update: starting ev 6066e9c5-f4a0-45bc-962b-469ddb50f4f2 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v93: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 14:04:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:36 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 20 14:04:36 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:36 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:36 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 20 14:04:36 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 20 14:04:36 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=19/20 n=0 ec=16/16 lis/c=19/19 les/c/f=20/20/0 sis=41 pruub=10.294141769s) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active pruub 64.257011414s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:36 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=19/20 n=0 ec=16/16 lis/c=19/19 les/c/f=20/20/0 sis=41 pruub=10.294141769s) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown pruub 64.257011414s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:36 np0005589310 ceph-mgr[75417]: [progress INFO root] update: starting ev 9a54beab-8989-4d82-84b3-86c0f4c75a04 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 20 14:04:36 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 20 14:04:36 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:36 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:36 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:36 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 20 14:04:37 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 20 14:04:37 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 20 14:04:37 np0005589310 ceph-mgr[75417]: [progress INFO root] update: starting ev b4978653-4f63-4315-89f5-02dcf0604908 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 20 14:04:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 20 14:04:37 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1e( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:37 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:37 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:37 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:37 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.e( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.14( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1a( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1e( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=41/42 n=0 ec=16/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.e( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.14( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v96: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 14:04:37 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:37 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 14:04:37 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:38 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 20 14:04:38 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 20 14:04:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 20 14:04:38 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:38 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:38 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 20 14:04:38 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 20 14:04:38 np0005589310 ceph-mgr[75417]: [progress INFO root] update: starting ev 56f990ca-20a0-4b2e-91fa-c3bf7841ed6a (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 20 14:04:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 20 14:04:38 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=8.025634766s) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active pruub 74.880065918s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:38 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 20 14:04:38 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=8.025634766s) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown pruub 74.880065918s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:38 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:38 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:38 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:38 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:38 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:38 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 20 14:04:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:38 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=43 pruub=13.832959175s) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active pruub 76.604873657s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:38 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=43 pruub=13.832959175s) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown pruub 76.604873657s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 20 14:04:39 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 20 14:04:39 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 20 14:04:39 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 20 14:04:39 np0005589310 ceph-mgr[75417]: [progress INFO root] update: starting ev cd6e8f66-be8a-47c9-9669-4ce163426a84 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 20 14:04:39 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 20 14:04:39 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1e( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1c( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1a( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.19( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.b( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.4( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.2( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.d( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.b( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.10( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.13( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.14( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.16( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.17( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=43/44 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.19( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.0( empty local-lis/les=43/44 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.2( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.4( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.17( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.16( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.10( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.13( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.14( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:39 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 20 14:04:39 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:39 np0005589310 ceph-mgr[75417]: [progress WARNING root] Starting Global Recovery Event,94 pgs not in active + clean state
Jan 20 14:04:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v99: 104 pgs: 1 peering, 93 unknown, 10 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:39 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 20 14:04:39 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 20 14:04:39 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 14:04:39 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 20 14:04:39 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 20 14:04:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 20 14:04:40 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:40 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 20 14:04:40 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 20 14:04:40 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 20 14:04:40 np0005589310 ceph-mgr[75417]: [progress INFO root] update: starting ev 0514ce63-36b2-4d6e-8aac-0f594bbf516e (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 20 14:04:40 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 45 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=8.028918266s) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active pruub 66.017578125s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 20 14:04:40 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:40 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 45 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=8.028918266s) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown pruub 66.017578125s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:40 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 20 14:04:40 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:40 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:40 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 20 14:04:40 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:40 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 20 14:04:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 20 14:04:41 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 20 14:04:41 np0005589310 ceph-mgr[75417]: [progress INFO root] update: starting ev a5b1dfa5-4905-495a-a966-243bf036b660 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 20 14:04:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 20 14:04:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 45 pg[6.0( v 39'39 (0'0,39'39] local-lis/les=22/23 n=22 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=45 pruub=8.031952858s) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 39'38 mlcod 39'38 active pruub 77.905471802s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.0( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=45 pruub=8.031952858s) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 39'38 mlcod 0'0 unknown pruub 77.905471802s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.5( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=22/23 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.9( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.7( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.10( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.17( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.8( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.6( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.17( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.10( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.0( empty local-lis/les=45/46 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:41 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:41 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v102: 150 pgs: 1 peering, 77 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 14:04:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 14:04:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:41 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 20 14:04:41 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 20 14:04:42 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 20 14:04:42 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:42 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:42 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:42 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 20 14:04:42 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 20 14:04:42 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47 pruub=8.024555206s) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active pruub 74.649314880s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:42 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 47 pg[8.0( v 32'6 (0'0,32'6] local-lis/les=31/32 n=6 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=47 pruub=12.445550919s) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 32'5 mlcod 32'5 active pruub 79.070503235s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:42 np0005589310 ceph-mgr[75417]: [progress INFO root] update: starting ev 60fb2f38-22e0-40bc-9722-d91203be4961 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 20 14:04:42 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 20 14:04:42 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:42 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47 pruub=8.024555206s) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown pruub 74.649314880s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 47 pg[8.0( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=47 pruub=12.445550919s) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 32'5 mlcod 0'0 unknown pruub 79.070503235s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.0( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 39'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:42 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:42 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:42 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:42 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:42 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:42 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:42 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 20 14:04:42 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 20 14:04:43 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 20 14:04:43 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 20 14:04:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 20 14:04:43 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 20 14:04:43 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 20 14:04:43 np0005589310 ceph-mgr[75417]: [progress INFO root] update: starting ev 8b5a242f-11de-4476-ad09-8e23a44afd16 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.12( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1e( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.10( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.18( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.17( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.19( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.16( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1a( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.14( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.4( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.b( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.5( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.6( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.7( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.2( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.9( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.d( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1( v 32'6 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.3( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.a( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.7( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.e( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.8( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.13( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1d( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.11( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1e( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.10( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.17( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.16( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.19( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.15( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.14( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.12( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.10( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.17( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.16( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.19( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.5( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.7( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.0( empty local-lis/les=47/48 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.0( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 32'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.14( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.3( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.7( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.13( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.8( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.17( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.16( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.19( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:43 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:43 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 20 14:04:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v105: 212 pgs: 1 peering, 139 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 14:04:43 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 14:04:43 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 20 14:04:43 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 20 14:04:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 20 14:04:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 20 14:04:44 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] update: starting ev 16835fd2-3657-4224-ab73-ad40c0663e80 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] complete: finished ev 6066e9c5-f4a0-45bc-962b-469ddb50f4f2 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event 6066e9c5-f4a0-45bc-962b-469ddb50f4f2 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] complete: finished ev 9a54beab-8989-4d82-84b3-86c0f4c75a04 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event 9a54beab-8989-4d82-84b3-86c0f4c75a04 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] complete: finished ev b4978653-4f63-4315-89f5-02dcf0604908 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event b4978653-4f63-4315-89f5-02dcf0604908 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] complete: finished ev 56f990ca-20a0-4b2e-91fa-c3bf7841ed6a (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event 56f990ca-20a0-4b2e-91fa-c3bf7841ed6a (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] complete: finished ev cd6e8f66-be8a-47c9-9669-4ce163426a84 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event cd6e8f66-be8a-47c9-9669-4ce163426a84 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] complete: finished ev 0514ce63-36b2-4d6e-8aac-0f594bbf516e (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event 0514ce63-36b2-4d6e-8aac-0f594bbf516e (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] complete: finished ev a5b1dfa5-4905-495a-a966-243bf036b660 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event a5b1dfa5-4905-495a-a966-243bf036b660 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] complete: finished ev 60fb2f38-22e0-40bc-9722-d91203be4961 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event 60fb2f38-22e0-40bc-9722-d91203be4961 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] complete: finished ev 8b5a242f-11de-4476-ad09-8e23a44afd16 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 49 pg[9.0( v 39'483 (0'0,39'483] local-lis/les=33/34 n=210 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=49 pruub=12.445308685s) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 39'482 mlcod 39'482 active pruub 81.086898804s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event 8b5a242f-11de-4476-ad09-8e23a44afd16 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] complete: finished ev 16835fd2-3657-4224-ab73-ad40c0663e80 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event 16835fd2-3657-4224-ab73-ad40c0663e80 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 49 pg[9.0( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=49 pruub=12.445308685s) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 39'482 mlcod 0'0 unknown pruub 81.086898804s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db057880 space 0x5614da73d440 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db056000 space 0x5614db55cb40 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3380 space 0x5614da35f740 0x0~98 clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db055d80 space 0x5614da73ae40 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db057f80 space 0x5614db731440 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db057b80 space 0x5614da721440 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139880 space 0x5614db5b2540 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3b00 space 0x5614db446b40 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139300 space 0x5614da6d3140 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3f80 space 0x5614db4c6b40 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139f80 space 0x5614dc5bee40 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139680 space 0x5614db537d40 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db079180 space 0x5614db72e540 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db13c180 space 0x5614dc5be540 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db079800 space 0x5614db0a6e40 0x0~98 clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3300 space 0x5614da688b40 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db110180 space 0x5614db72f140 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db12a680 space 0x5614da91c540 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db131c80 space 0x5614db5fb140 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139a00 space 0x5614db5b3d40 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e2580 space 0x5614db4d8e40 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e2c80 space 0x5614da689440 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139180 space 0x5614da751440 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3680 space 0x5614da688240 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db078900 space 0x5614db534840 0x0~98 clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139c00 space 0x5614da6d2840 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db055a80 space 0x5614da745740 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db11bb00 space 0x5614da91d740 0x0~98 clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db12a500 space 0x5614db730840 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db12af00 space 0x5614da35e840 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db079600 space 0x5614db0a7a40 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e2280 space 0x5614db447440 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139100 space 0x5614db72f740 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db079780 space 0x5614db55ae40 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614da2c3880 space 0x5614db5b3740 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db056180 space 0x5614da7a8240 0x0~98 clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3d80 space 0x5614db4c6240 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db131980 space 0x5614da96e840 0x0~98 clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3d00 space 0x5614da720b40 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db057600 space 0x5614da68c240 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614da2c1900 space 0x5614db444540 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3880 space 0x5614db4c7440 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db057980 space 0x5614da73cb40 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db138200 space 0x5614da750b40 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db110f80 space 0x5614da784e40 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db057900 space 0x5614db55d440 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db079680 space 0x5614da7a9d40 0x0~98 clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db11af00 space 0x5614db536e40 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614da92de00 space 0x5614da73a240 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db12a480 space 0x5614db4d9a40 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0d2080 space 0x5614da720540 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e2780 space 0x5614db4d8540 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db12ae00 space 0x5614da744240 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db11b000 space 0x5614da6d3a40 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db055700 space 0x5614da726240 0x0~9a clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e2080 space 0x5614db447d40 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3b80 space 0x5614db5b2e40 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db057580 space 0x5614db55e840 0x0~98 clean)
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db12a200 space 0x5614dc5bf740 0x0~6e clean)
Jan 20 14:04:44 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:44 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:44 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 20 14:04:44 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:44 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:44 np0005589310 ceph-mgr[75417]: [progress INFO root] Writing back 16 completed events
Jan 20 14:04:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 14:04:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 20 14:04:44 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 20 14:04:44 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 49 pg[10.0( v 39'18 (0'0,39'18] local-lis/les=35/36 n=9 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=49 pruub=13.920217514s) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 39'17 mlcod 39'17 active pruub 76.470893860s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:44 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 49 pg[10.0( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=49 pruub=13.920217514s) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 39'17 mlcod 0'0 unknown pruub 76.470893860s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 20 14:04:45 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 20 14:04:45 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.15( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.14( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.17( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.16( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.11( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.10( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.13( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.12( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.9( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.2( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.a( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.8( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.3( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.6( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.7( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.5( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1a( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.18( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.12( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.19( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.10( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1f( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1e( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1d( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1c( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.11( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1b( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1a( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.19( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.18( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.7( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.5( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.6( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.4( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.3( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.f( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.8( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.9( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.b( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.c( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.a( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.e( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.2( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.13( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.4( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.14( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.10( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.14( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.12( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.15( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.16( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.17( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.d( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.12( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1d( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.0( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 39'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.2( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1c( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.18( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.5( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.0( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 39'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.3( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.9( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.c( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.a( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1a( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.18( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.4( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.5( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.14( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.15( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.d( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:45 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v108: 274 pgs: 1 peering, 62 unknown, 211 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:45 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 14:04:45 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:46 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 20 14:04:46 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 14:04:46 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:46 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 20 14:04:46 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 20 14:04:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 1 peering, 93 unknown, 211 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:47 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=51 pruub=12.700772285s) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active pruub 85.138977051s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=51 pruub=12.700772285s) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown pruub 85.138977051s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 20 14:04:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 20 14:04:48 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:48 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 20 14:04:48 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 20 14:04:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:49 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 20 14:04:49 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 20 14:04:49 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 20 14:04:49 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 20 14:04:50 np0005589310 systemd[76564]: Starting Mark boot as successful...
Jan 20 14:04:50 np0005589310 systemd[76564]: Finished Mark boot as successful.
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 20 14:04:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v113: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764896393s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.867973328s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.768498421s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.871620178s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.787466049s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.890731812s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.768334389s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.871620178s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.787409782s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.890731812s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.767777443s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.871459961s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.767750740s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.871459961s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764649391s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.867973328s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.784059525s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.890701294s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.784017563s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.890701294s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764866829s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.871673584s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764767647s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.871673584s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764997482s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.871994019s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764970779s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.871994019s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.766935349s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.874229431s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.766908646s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.874229431s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764303207s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.871833801s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764279366s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.871833801s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.786118507s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.893867493s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.786084175s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.893867493s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.786574364s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.893852234s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785934448s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.893852234s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763784409s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.871849060s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763747215s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.871849060s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.766010284s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.874153137s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785912514s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.894157410s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.765964508s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.874153137s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785885811s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894157410s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785758972s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.894165039s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.765930176s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.874359131s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763663292s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.872108459s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785725594s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894165039s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763641357s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.872108459s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.765893936s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.874359131s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785747528s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.894439697s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785728455s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894439697s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763573647s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.872352600s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763344765s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.872116089s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785593987s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.894432068s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785578728s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894432068s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763516426s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.872352600s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763413429s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.872367859s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763390541s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.872367859s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763951302s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.873092651s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763233185s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.872367859s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763247490s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.872116089s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763934135s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.873092651s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763196945s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.872367859s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764789581s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.874176025s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763873100s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.873283386s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764762878s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.874176025s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763821602s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.873283386s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764575958s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.874061584s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764553070s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.874061584s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764651299s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.874244690s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764633179s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.874244690s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.10( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.12( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.14( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.18( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.8( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.1b( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.1a( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.9( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.5( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.e( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.1( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.7( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.1( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.a( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.13( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.d( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.f( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.11( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.1c( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.4( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.789543152s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 79.032371521s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.789494514s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 79.032371521s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.2( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964189529s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.827888489s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.758675575s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.004310608s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964165688s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.827888489s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780668259s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.644523621s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.762928963s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.009040833s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.762892723s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.009040833s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.736981392s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983207703s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785997391s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.032333374s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785974503s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.032333374s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.790488243s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037017822s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.736595154s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983146667s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.736571312s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983146667s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785683632s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.032432556s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785655975s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.032432556s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.736198425s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983116150s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.736177444s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983116150s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.761826515s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.008903503s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.761796951s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.008903503s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.14( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.735393524s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983123779s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.735364914s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983123779s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.761089325s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.009010315s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.761060715s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.009010315s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.760754585s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.008918762s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.760728836s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.008918762s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.734626770s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983070374s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.734597206s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983070374s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.760367393s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.008987427s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.760339737s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.008987427s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780646324s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.644523621s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.788419724s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037208557s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.788397789s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037208557s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.758645058s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.004310608s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780597687s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.644500732s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.734041214s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983276367s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.759681702s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.008995056s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.759655952s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.008995056s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.1e( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.16( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.733540535s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983055115s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.733518600s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983055115s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787724495s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037277222s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787688255s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037277222s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.759292603s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.008972168s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.759272575s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.008972168s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787429810s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037322998s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787406921s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037322998s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.733282089s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983276367s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787016869s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037391663s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786974907s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037391663s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.732527733s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983039856s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.732491493s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983039856s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.758346558s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.009094238s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.758312225s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.009094238s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.13( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786029816s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037376404s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785999298s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037376404s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.731573105s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983001709s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.731555939s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983016968s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.731534958s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983001709s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.731522560s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983016968s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.758448601s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.010025024s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.758414268s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.010025024s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785807610s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037528992s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785775185s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037528992s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.736899376s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983207703s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.757900238s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.009811401s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.757874489s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.009811401s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.790459633s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037017822s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785063744s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037483215s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.730375290s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982833862s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.730344772s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982833862s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785021782s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037483215s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.730322838s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982826233s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.730132103s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982826233s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.757175446s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.009902954s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.757140160s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.009902954s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.15( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.1b( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780577660s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.644500732s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785690308s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.649703979s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785670280s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.649703979s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784358025s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 79.037574768s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.11( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784296036s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 79.037574768s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756456375s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.009887695s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784154892s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037628174s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.729422569s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982917786s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756424904s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.009887695s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784133911s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037628174s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.729400635s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982917786s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.729257584s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982788086s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.729220390s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982788086s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756373405s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.010078430s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756349564s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.010078430s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.728921890s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982780457s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756165504s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.010063171s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756142616s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.010063171s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.728890419s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982780457s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.788300514s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 79.042434692s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.757574081s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.011749268s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.728663445s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982841492s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.757555008s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.011749268s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.728637695s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982841492s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.10( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787322998s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 79.041717529s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.728191376s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982749939s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.788058281s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 79.042434692s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787138939s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 79.041717529s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.728160858s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982749939s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787509918s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.042221069s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.727574348s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982307434s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.727550507s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982307434s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787473679s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.042221069s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.7( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.18( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786794662s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.041732788s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786738396s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.041732788s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786662102s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.041809082s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786642075s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.041809082s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.727453232s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982788086s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.727432251s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982788086s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756210327s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.011566162s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.727087021s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982635498s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786172867s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 79.041740417s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.727060318s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982635498s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756165504s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.011566162s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786118507s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 79.041740417s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786162376s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 79.041816711s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786121368s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 79.041816711s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.755785942s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.011741638s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785891533s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.041847229s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.726333618s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982315063s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.755758286s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.011741638s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.f( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.726308823s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982315063s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785861015s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.041847229s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.755560875s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.011718750s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.755541801s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.011718750s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.721095085s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.977561951s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786021233s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.042503357s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.721063614s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.977561951s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785986900s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.042503357s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.10( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.725831032s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982559204s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.725807190s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982559204s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.1e( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.4( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.b( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.8( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.1a( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.15( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.1d( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.15( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.751092911s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615264893s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.7( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.11( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.8( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.11( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.752708435s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.011756897s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.752679825s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.011756897s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.751070023s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615264893s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.18( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.19( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.1c( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780358315s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.644607544s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780264854s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.644500732s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.746891975s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.611129761s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780333519s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.644607544s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.9( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.12( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.966338158s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.830680847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780205727s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.644500732s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.966318130s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.830680847s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.2( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.2( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.752981186s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617393494s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.d( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.746796608s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.611129761s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.752961159s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617393494s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787708282s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.652297974s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.d( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.1( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.5( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.12( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787687302s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.652297974s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.966114998s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.830795288s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.b( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.779447556s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.644233704s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965987206s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.830795288s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.c( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.4( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.c( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.8( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.e( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.2( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.779420853s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.644233704s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.779077530s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.644195557s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.2( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.d( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.750434875s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615562439s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.779045105s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.644195557s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.750388145s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615562439s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.779064178s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.644477844s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.779027939s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.644477844s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787136078s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.652626038s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.3( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.9( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.e( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.2( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.8( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965316772s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.830879211s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.8( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.3( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787094116s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.652626038s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.e( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.1( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965293884s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.830879211s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.778452873s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.644050598s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.e( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.1c( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.15( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.778429985s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.644050598s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.4( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.1d( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.18( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.7( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965420723s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831054688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.1b( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965401649s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831054688s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965251923s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831024170s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.11( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.749588966s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615371704s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.1a( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.749567032s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615371704s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965233803s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831024170s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.16( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786496162s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.652381897s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.778088570s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.644058228s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.1b( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.1c( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.1e( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.16( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.1f( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786456108s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.652381897s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.1f( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.f( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.1c( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.778070450s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.644058228s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964951515s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.830978394s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.17( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964932442s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.830978394s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.777845383s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.643966675s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.15( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.777825356s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.643966675s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.749189377s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615432739s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786195755s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.652458191s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.b( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786175728s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.652458191s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964683533s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831100464s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.777526855s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.643943787s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964662552s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831100464s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.777509689s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.643943787s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748828888s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615470886s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748806953s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615470886s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.777205467s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.643989563s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.777182579s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.643989563s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776851654s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.643798828s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776831627s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.643798828s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.a( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.9( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.9( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964165688s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831153870s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964144707s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831153870s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776750565s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.643852234s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748430252s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615554810s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776727676s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.643852234s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748412132s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615554810s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776565552s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.643806458s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785219193s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.652481079s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776546478s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.643806458s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785199165s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.652481079s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776668549s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.643989563s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776627541s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.643989563s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.963844299s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831245422s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748086929s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615570068s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.963767052s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831245422s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748064041s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615570068s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784809113s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.652488708s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784788132s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.652488708s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.c( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.6( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.6( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.f( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.775791168s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.643661499s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.775772095s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.643661499s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.747633934s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615554810s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.1a( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.747612000s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615554810s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784606934s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.652587891s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784584045s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.652587891s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.775033951s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.643058777s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.775012970s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.643058777s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748167038s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.616256714s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748147011s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.616256714s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962833405s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831268311s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962810516s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831268311s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.1f( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962690353s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831321716s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962669373s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831321716s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.774422646s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.643180847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.774404526s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.643180847s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.788712502s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.657707214s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962219238s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831245422s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.788688660s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.657707214s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.747247696s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615432739s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962195396s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831245422s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773546219s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642776489s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773426056s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642776489s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773351669s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642852783s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773334503s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642852783s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773325920s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642921448s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773024559s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642700195s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.15( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.1d( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.18( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773303032s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642921448s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773001671s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642700195s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.772718430s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642684937s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.772699356s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642684937s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745589256s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615661621s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745562553s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615661621s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.772440910s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642616272s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.772425652s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642616272s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.961070061s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831382751s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.961057663s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831382751s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745681763s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.616119385s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745664597s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.616119385s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.772005081s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642562866s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771986961s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642562866s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786767960s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.657432556s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960700989s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831405640s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786745071s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.657432556s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771851540s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642578125s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960680962s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831405640s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771830559s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642578125s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745315552s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.616165161s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960508347s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831352234s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745295525s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.616165161s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960484505s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831352234s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771564484s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642524719s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771542549s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642524719s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771504402s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642517090s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786661148s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.657684326s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786640167s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.657684326s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771484375s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642517090s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960370064s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831489563s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960350037s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831489563s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771395683s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642555237s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.746199608s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617378235s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771377563s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642555237s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.746182442s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617378235s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.746048927s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617370605s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786839485s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 39'483 active pruub 85.658187866s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.746029854s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617370605s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771099091s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642494202s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786801338s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 39'483 unknown NOTIFY pruub 85.658187866s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771081924s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642494202s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960078239s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831542969s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960051537s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831542969s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.770812035s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642379761s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.770791054s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642379761s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.959890366s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831542969s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.959867477s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831542969s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745687485s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617469788s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.770328522s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642173767s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.770303726s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642173767s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745657921s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617469788s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.961028099s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.833045959s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.961006165s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.833045959s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745185852s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617462158s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745160103s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617462158s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960451126s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.832916260s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785326958s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.657814026s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785302162s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.657814026s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960332870s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.832969666s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960268974s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.832916260s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960315704s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.832969666s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785018921s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.657798767s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784984589s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.657798767s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.768787384s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.641769409s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.768755913s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.641769409s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784722328s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.657867432s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784701347s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.657867432s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.769007683s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642257690s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.768991470s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642257690s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.744286537s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617675781s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.744267464s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617675781s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.959568024s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.833099365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.959545135s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.833099365s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.758837700s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.641807556s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.758797646s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.641807556s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.758874893s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.641998291s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.743901253s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617546082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.774927139s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.658134460s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.734324455s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617546082s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.949831963s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.833076477s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.774906158s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.658134460s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.949810028s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.833076477s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.758847237s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.641998291s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.734236717s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617637634s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.734215736s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617637634s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.755042076s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.638595581s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.755023003s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.638595581s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.758173943s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.641777039s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.758000374s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.641777039s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.770358086s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642242432s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.758327484s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642242432s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.12( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.10( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.17( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.11( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.15( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.12( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.13( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.1a( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.1d( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.19( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.16( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.6( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.9( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.d( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.c( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.11( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.7( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.f( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.f( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.3( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.b( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.5( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.4( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.9( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.2( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.13( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.6( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.1b( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.1( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.14( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.1a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.19( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.a( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:51 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.18( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 20 14:04:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 20 14:04:52 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 20 14:04:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 14:04:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 14:04:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.14( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.11( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.17( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.15( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.12( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.16( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.d( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.5( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.13( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.3( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.a( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.c( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.4( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.9( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.7( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.1( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.f( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.6( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.9( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.12( v 50'19 lc 39'17 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.1a( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.14( v 50'19 lc 36'7 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.1b( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.19( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.1d( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.16( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.8( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.3( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.e( v 50'19 lc 36'4 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.15( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.2( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.1f( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.2( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.5( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.f( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.1c( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.4( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.1d( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.d( v 50'19 lc 36'5 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.7( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.9( v 50'19 lc 36'8 (0'0,50'19] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.18( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.b( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.19( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.1e( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.10( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.1b( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.f( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.15( v 50'19 lc 36'3 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.c( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.1( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.11( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.6( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.13( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.3( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.6( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.6( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.2( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.4( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.f( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.d( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.7( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.5( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.9( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.18( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.8( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.14( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.12( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.10( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.9( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.17( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.a( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:52 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v116: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:04:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 20 14:04:53 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 20 14:04:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 20 14:04:53 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 20 14:04:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 20 14:04:53 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 14:04:53 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 14:04:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 20 14:04:53 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 20 14:04:53 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 20 14:04:53 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.636933327s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 active pruub 94.894454956s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:53 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.636891365s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894454956s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:53 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635606766s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 active pruub 94.894157410s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:53 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635582924s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894157410s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:53 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635617256s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 active pruub 94.894279480s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:53 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635570526s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894279480s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:53 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635735512s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 active pruub 94.894470215s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:53 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635711670s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894470215s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:53 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:53 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:54 np0005589310 ceph-mgr[75417]: [progress INFO root] Completed event 7d2393c7-bd7a-4697-bc54-3febf0a0d0e3 (Global Recovery Event) in 15 seconds
Jan 20 14:04:54 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 20 14:04:54 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 20 14:04:54 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 20 14:04:54 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080907822s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.078788757s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:54 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080821991s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078788757s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:54 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080473900s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.078704834s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:54 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080350876s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078704834s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:54 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080650330s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.079368591s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:54 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080478668s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079368591s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:54 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 14:04:54 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 14:04:54 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079759598s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.079399109s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:54 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079503059s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079399109s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:54 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:54 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:54 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:54 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:54 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:54 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:54 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:54 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:54 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:54 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:54 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:54 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v119: 305 pgs: 2 active+recovery_wait, 12 active+recovery_wait+remapped, 3 active+recovery_wait+degraded, 8 peering, 1 active+recovering, 279 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4/249 objects degraded (1.606%); 82/249 objects misplaced (32.932%); 517 B/s, 2 keys/s, 8 objects/s recovering
Jan 20 14:04:55 np0005589310 ceph-mon[75120]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 4/249 objects degraded (1.606%), 3 pgs degraded (PG_DEGRADED)
Jan 20 14:04:55 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 20 14:04:55 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 20 14:04:55 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=50'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053964615s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.078872681s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053746223s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078872681s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054486275s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079605103s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054221153s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079605103s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054390907s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079986572s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054670334s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 active pruub 94.079673767s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054303169s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079986572s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053840637s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY pruub 94.079673767s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053636551s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079513550s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053556442s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079513550s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053482056s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079612732s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053318024s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079582214s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053416252s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079689026s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053367615s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079612732s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053251266s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079582214s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053354263s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079818726s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053389549s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079895020s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053297043s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079818726s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053203583s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079658508s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053352356s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079895020s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053241730s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079689026s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053125381s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079887390s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.052889824s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079658508s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:55 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053079605s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079887390s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:55 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:56 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 20 14:04:56 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 20 14:04:56 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 20 14:04:56 np0005589310 ceph-mon[75120]: Health check failed: Degraded data redundancy: 4/249 objects degraded (1.606%), 3 pgs degraded (PG_DEGRADED)
Jan 20 14:04:56 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:56 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=55'486 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:56 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:56 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:56 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:56 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:56 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:56 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:56 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:56 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:56 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:56 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:04:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 2 active+recovery_wait, 12 active+recovery_wait+remapped, 3 active+recovery_wait+degraded, 8 peering, 1 active+recovering, 279 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4/249 objects degraded (1.606%); 82/249 objects misplaced (32.932%); 517 B/s, 2 keys/s, 8 objects/s recovering
Jan 20 14:04:58 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 20 14:04:58 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 20 14:04:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:04:58 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 20 14:04:58 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 20 14:04:58 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 20 14:04:58 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 20 14:04:59 np0005589310 ceph-mgr[75417]: [progress INFO root] Writing back 17 completed events
Jan 20 14:04:59 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 14:04:59 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:04:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 12 active+recovery_wait+remapped, 4 peering, 289 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 82/249 objects misplaced (32.932%); 478 B/s, 1 keys/s, 7 objects/s recovering
Jan 20 14:04:59 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 20 14:04:59 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 20 14:05:00 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/249 objects degraded (1.606%), 3 pgs degraded)
Jan 20 14:05:00 np0005589310 ceph-mon[75120]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 20 14:05:00 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:05:00 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 20 14:05:00 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 20 14:05:01 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 20 14:05:01 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 20 14:05:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v124: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 906 B/s, 17 objects/s recovering
Jan 20 14:05:01 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 20 14:05:01 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 20 14:05:01 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 20 14:05:01 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 20 14:05:01 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 20 14:05:01 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 14:05:01 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 14:05:01 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 20 14:05:01 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 20 14:05:01 np0005589310 ceph-mon[75120]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/249 objects degraded (1.606%), 3 pgs degraded)
Jan 20 14:05:01 np0005589310 ceph-mon[75120]: Cluster is now healthy
Jan 20 14:05:01 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 20 14:05:01 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 20 14:05:02 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 14:05:02 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 14:05:02 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707896233s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897071838s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:02 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707833290s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897071838s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:02 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707865715s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897239685s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:02 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707767487s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897239685s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:02 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707477570s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897384644s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:02 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707447052s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897384644s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:02 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707047462s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897460938s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:02 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.706920624s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897460938s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:02 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:02 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:02 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:02 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:03 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 20 14:05:03 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 20 14:05:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v126: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 540 B/s, 11 objects/s recovering
Jan 20 14:05:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 20 14:05:03 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 20 14:05:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 20 14:05:03 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 20 14:05:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 20 14:05:03 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 20 14:05:03 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 20 14:05:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 20 14:05:03 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 20 14:05:03 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.851051331s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 active pruub 102.894393921s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:03 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.850987434s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894393921s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:03 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.849040031s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 active pruub 102.894691467s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:03 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.848990440s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894691467s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:03 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 20 14:05:03 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 20 14:05:03 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:03 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:03 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:03 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:03 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:03 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:05:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:05:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:05:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:05:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:05:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:05:04 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 20 14:05:04 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 20 14:05:04 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 20 14:05:04 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 20 14:05:04 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 20 14:05:04 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:04 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:04 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 20 14:05:04 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 20 14:05:05 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 20 14:05:05 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 20 14:05:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v129: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 690 B/s, 1 keys/s, 14 objects/s recovering
Jan 20 14:05:06 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 20 14:05:06 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 20 14:05:06 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 20 14:05:06 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 20 14:05:06 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 20 14:05:06 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 20 14:05:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v130: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 1 keys/s, 1 objects/s recovering
Jan 20 14:05:07 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 20 14:05:07 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 20 14:05:07 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 20 14:05:07 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 20 14:05:08 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 20 14:05:08 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 20 14:05:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v131: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 284 B/s, 1 keys/s, 1 objects/s recovering
Jan 20 14:05:09 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 20 14:05:09 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 20 14:05:09 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 20 14:05:09 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 20 14:05:09 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 20 14:05:09 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 20 14:05:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 20 14:05:10 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 20 14:05:10 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 20 14:05:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 20 14:05:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 20 14:05:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 20 14:05:10 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430742264s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 active pruub 108.897323608s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:10 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430643082s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897323608s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:10 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 20 14:05:10 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430679321s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 active pruub 108.897682190s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:10 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430572510s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897682190s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:10 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:10 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:10 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 20 14:05:10 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 20 14:05:10 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 20 14:05:10 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 20 14:05:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 20 14:05:11 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 20 14:05:11 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 20 14:05:11 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 20 14:05:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 20 14:05:11 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 20 14:05:11 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 20 14:05:11 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=62/63 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:11 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v134: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 327 B/s, 1 keys/s, 1 objects/s recovering
Jan 20 14:05:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 20 14:05:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 20 14:05:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 20 14:05:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 20 14:05:11 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 20 14:05:11 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 20 14:05:12 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 20 14:05:12 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 20 14:05:12 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 20 14:05:12 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 20 14:05:12 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 20 14:05:12 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 20 14:05:12 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 20 14:05:12 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 20 14:05:12 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 20 14:05:12 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465168953s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.653533936s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:12 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465118408s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653533936s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:12 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:12 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463762283s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.653617859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:12 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463727951s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653617859s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:12 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473500252s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.663787842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:12 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:12 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473434448s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.663787842s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:12 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:12 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467473984s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.658554077s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:12 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467450142s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.658554077s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:12 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 20 14:05:13 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 20 14:05:13 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 20 14:05:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 20 14:05:13 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 20 14:05:13 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:13 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:13 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:13 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:13 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:13 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:13 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:13 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:13 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:13 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:13 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:13 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:13 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:13 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:13 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:13 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:13 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 20 14:05:13 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 20 14:05:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v137: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:05:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 20 14:05:13 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 20 14:05:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 20 14:05:13 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 20 14:05:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 20 14:05:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 20 14:05:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 20 14:05:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 20 14:05:14 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 20 14:05:14 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=13.457354546s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=39'483 active pruub 116.284957886s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:14 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470577240s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 active pruub 117.298309326s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:14 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=13.457168579s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=39'483 unknown NOTIFY pruub 116.284957886s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:14 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470458031s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298309326s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:14 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470116615s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 active pruub 117.298522949s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:14 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470049858s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298522949s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:14 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.469445229s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 active pruub 117.298500061s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:14 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.469401360s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298500061s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:14 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:14 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:14 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:14 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 20 14:05:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 20 14:05:14 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:14 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:14 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:14 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:14 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 20 14:05:14 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 20 14:05:15 np0005589310 python3[98147]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:05:15 np0005589310 podman[98148]: 2026-01-20 19:05:15.233803042 +0000 UTC m=+0.047108698 container create ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b (image=quay.io/ceph/ceph:v20, name=vigorous_ellis, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:05:15 np0005589310 systemd[1]: Started libpod-conmon-ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b.scope.
Jan 20 14:05:15 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:05:15 np0005589310 podman[98148]: 2026-01-20 19:05:15.215462048 +0000 UTC m=+0.028767714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:05:15 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc2bf5a7e332ab3044b0f0af63c735ca17a715f88a699362dbc44f490b63c6d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:15 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc2bf5a7e332ab3044b0f0af63c735ca17a715f88a699362dbc44f490b63c6d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:15 np0005589310 podman[98148]: 2026-01-20 19:05:15.327898013 +0000 UTC m=+0.141203739 container init ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b (image=quay.io/ceph/ceph:v20, name=vigorous_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 20 14:05:15 np0005589310 podman[98148]: 2026-01-20 19:05:15.33495205 +0000 UTC m=+0.148257696 container start ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b (image=quay.io/ceph/ceph:v20, name=vigorous_ellis, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 14:05:15 np0005589310 podman[98148]: 2026-01-20 19:05:15.339099518 +0000 UTC m=+0.152405164 container attach ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b (image=quay.io/ceph/ceph:v20, name=vigorous_ellis, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:05:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 20 14:05:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 20 14:05:15 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:15 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.003064156s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577781677s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002995491s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577781677s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:15 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002538681s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577713013s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002419472s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577713013s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001266479s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577674866s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:15 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001171112s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:15 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000796318s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577674866s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000699997s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:15 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:15 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:15 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:15 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 20 14:05:15 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 20 14:05:15 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:15 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v140: 305 pgs: 4 unknown, 4 remapped+peering, 297 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:05:16 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 20 14:05:16 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 20 14:05:16 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 20 14:05:16 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 20 14:05:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 20 14:05:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 20 14:05:16 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 20 14:05:16 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:16 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:16 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:16 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:16 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:16 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:16 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:16 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:16 np0005589310 vigorous_ellis[98163]: could not fetch user info: no user info saved
Jan 20 14:05:16 np0005589310 systemd[1]: libpod-ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b.scope: Deactivated successfully.
Jan 20 14:05:16 np0005589310 podman[98148]: 2026-01-20 19:05:16.518722133 +0000 UTC m=+1.332027789 container died ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b (image=quay.io/ceph/ceph:v20, name=vigorous_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:05:16 np0005589310 systemd[1]: var-lib-containers-storage-overlay-cc2bf5a7e332ab3044b0f0af63c735ca17a715f88a699362dbc44f490b63c6d0-merged.mount: Deactivated successfully.
Jan 20 14:05:16 np0005589310 podman[98148]: 2026-01-20 19:05:16.5620667 +0000 UTC m=+1.375372356 container remove ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b (image=quay.io/ceph/ceph:v20, name=vigorous_ellis, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 14:05:16 np0005589310 systemd[1]: libpod-conmon-ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b.scope: Deactivated successfully.
Jan 20 14:05:16 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 20 14:05:16 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 20 14:05:16 np0005589310 python3[98287]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:05:16 np0005589310 podman[98288]: 2026-01-20 19:05:16.89783513 +0000 UTC m=+0.043159684 container create 33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248 (image=quay.io/ceph/ceph:v20, name=vigilant_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:05:16 np0005589310 systemd[1]: Started libpod-conmon-33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248.scope.
Jan 20 14:05:16 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:05:16 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e87b5462c7d57772b472a5a9ada2d15a99bb1d6e0e9ddc7d41181697a808612/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:16 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e87b5462c7d57772b472a5a9ada2d15a99bb1d6e0e9ddc7d41181697a808612/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:16 np0005589310 podman[98288]: 2026-01-20 19:05:16.879605348 +0000 UTC m=+0.024929922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 14:05:16 np0005589310 podman[98288]: 2026-01-20 19:05:16.977196622 +0000 UTC m=+0.122521196 container init 33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248 (image=quay.io/ceph/ceph:v20, name=vigilant_shamir, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:05:16 np0005589310 podman[98288]: 2026-01-20 19:05:16.983131243 +0000 UTC m=+0.128455797 container start 33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248 (image=quay.io/ceph/ceph:v20, name=vigilant_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Jan 20 14:05:16 np0005589310 podman[98288]: 2026-01-20 19:05:16.986893782 +0000 UTC m=+0.132218356 container attach 33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248 (image=quay.io/ceph/ceph:v20, name=vigilant_shamir, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]: {
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "user_id": "openstack",
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "display_name": "openstack",
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "email": "",
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "suspended": 0,
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "max_buckets": 1000,
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "subusers": [],
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "keys": [
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:        {
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:            "user": "openstack",
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:            "access_key": "O6AWP42HJEVFMD2DU0GN",
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:            "secret_key": "C5DBN8T35EW8FmXv62zVf5jg7zJ1IL2pEqBHcnxE",
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:            "active": true,
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:            "create_date": "2026-01-20T19:05:17.194920Z"
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:        }
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    ],
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "swift_keys": [],
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "caps": [],
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "op_mask": "read, write, delete",
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "default_placement": "",
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "default_storage_class": "",
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "placement_tags": [],
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "bucket_quota": {
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:        "enabled": false,
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:        "check_on_raw": false,
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:        "max_size": -1,
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:        "max_size_kb": 0,
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:        "max_objects": -1
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    },
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "user_quota": {
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:        "enabled": false,
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:        "check_on_raw": false,
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:        "max_size": -1,
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:        "max_size_kb": 0,
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:        "max_objects": -1
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    },
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "temp_url_keys": [],
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "type": "rgw",
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "mfa_ids": [],
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "account_id": "",
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "path": "/",
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "create_date": "2026-01-20T19:05:17.194402Z",
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "tags": [],
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]:    "group_ids": []
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]: }
Jan 20 14:05:17 np0005589310 vigilant_shamir[98303]: 
Jan 20 14:05:17 np0005589310 systemd[1]: libpod-33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248.scope: Deactivated successfully.
Jan 20 14:05:17 np0005589310 podman[98288]: 2026-01-20 19:05:17.231180613 +0000 UTC m=+0.376505177 container died 33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248 (image=quay.io/ceph/ceph:v20, name=vigilant_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:05:17 np0005589310 systemd[1]: var-lib-containers-storage-overlay-1e87b5462c7d57772b472a5a9ada2d15a99bb1d6e0e9ddc7d41181697a808612-merged.mount: Deactivated successfully.
Jan 20 14:05:17 np0005589310 podman[98288]: 2026-01-20 19:05:17.273828584 +0000 UTC m=+0.419153148 container remove 33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248 (image=quay.io/ceph/ceph:v20, name=vigilant_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:05:17 np0005589310 systemd[1]: libpod-conmon-33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248.scope: Deactivated successfully.
Jan 20 14:05:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 20 14:05:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 20 14:05:17 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 20 14:05:17 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69 pruub=14.956954956s) [2] async=[2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 active pruub 120.844100952s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:17 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69 pruub=14.956759453s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844100952s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:17 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.957309723s) [2] async=[2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 active pruub 120.844993591s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:17 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.956269264s) [2] async=[2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 active pruub 120.844009399s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:17 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.957247734s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 120.844993591s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:17 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.956115723s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844009399s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:17 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.956018448s) [2] async=[2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 active pruub 120.844070435s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:17 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.955755234s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 unknown NOTIFY pruub 120.844070435s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:17 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 pct=0'0 crt=68'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:17 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:17 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 pct=0'0 crt=68'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:17 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:17 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:17 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 pct=0'0 crt=68'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:17 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:17 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 4 unknown, 4 remapped+peering, 297 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:05:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 20 14:05:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 20 14:05:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 20 14:05:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 20 14:05:18 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 20 14:05:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=69/70 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=69/70 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:18 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 20 14:05:18 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 20 14:05:19 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 20 14:05:19 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 20 14:05:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v145: 305 pgs: 4 unknown, 4 remapped+peering, 297 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 31 op/s
Jan 20 14:05:21 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 20 14:05:21 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 20 14:05:21 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 20 14:05:21 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 20 14:05:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.7 KiB/s wr, 76 op/s; 526 B/s, 11 objects/s recovering
Jan 20 14:05:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 20 14:05:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 20 14:05:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 20 14:05:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 20 14:05:22 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 20 14:05:22 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 20 14:05:22 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 20 14:05:22 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 20 14:05:22 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 20 14:05:22 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 20 14:05:22 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 20 14:05:22 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71 pruub=15.901213646s) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 active pruub 126.894775391s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:22 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71 pruub=15.901094437s) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 126.894775391s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:22 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:22 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.466033936s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 active pruub 117.657966614s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:22 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465988159s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 117.657966614s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:22 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465965271s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 active pruub 117.658271790s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:22 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465903282s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 117.658271790s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:22 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:22 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 20 14:05:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 20 14:05:23 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 20 14:05:23 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 20 14:05:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 20 14:05:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 20 14:05:23 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 20 14:05:23 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:23 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:23 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:23 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:23 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:23 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 20 14:05:23 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 20 14:05:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.7 KiB/s wr, 76 op/s; 526 B/s, 11 objects/s recovering
Jan 20 14:05:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 20 14:05:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 20 14:05:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 20 14:05:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 20 14:05:24 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 20 14:05:24 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 20 14:05:24 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 20 14:05:24 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 20 14:05:24 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 20 14:05:24 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190241814s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 active pruub 116.897994995s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:24 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190208435s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 116.897994995s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:24 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:24 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:24 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:24 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 20 14:05:24 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 20 14:05:24 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 20 14:05:24 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 20 14:05:24 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 20 14:05:24 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 20 14:05:25 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 20 14:05:25 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 20 14:05:25 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 20 14:05:25 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 20 14:05:25 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 20 14:05:25 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997115135s) [2] async=[2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 active pruub 124.712516785s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:25 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997002602s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.712516785s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:25 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993810654s) [2] async=[2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 active pruub 124.709548950s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:25 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993714333s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 124.709548950s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:25 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=73/74 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:25 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:25 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:25 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 pct=0'0 crt=68'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:25 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 1 peering, 2 remapped+peering, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:05:25 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 20 14:05:25 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 20 14:05:26 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 20 14:05:26 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 20 14:05:26 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 20 14:05:26 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 20 14:05:26 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 20 14:05:26 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=74/75 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:26 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=74/75 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:27 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 20 14:05:27 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 20 14:05:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v154: 305 pgs: 1 peering, 2 remapped+peering, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:05:28 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 20 14:05:28 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 20 14:05:28 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 20 14:05:28 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 20 14:05:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:29 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 20 14:05:29 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 20 14:05:29 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 20 14:05:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 1 peering, 2 remapped+peering, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:05:29 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 20 14:05:30 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 20 14:05:30 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 20 14:05:30 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 20 14:05:30 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 20 14:05:31 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 20 14:05:31 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 20 14:05:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:05:31
Jan 20 14:05:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:05:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Some PGs (0.009836) are inactive; try again later
Jan 20 14:05:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v156: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 1 objects/s recovering
Jan 20 14:05:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 20 14:05:31 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 20 14:05:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 20 14:05:31 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 20 14:05:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 20 14:05:31 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 20 14:05:31 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 20 14:05:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 20 14:05:31 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 20 14:05:31 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 20 14:05:31 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 20 14:05:32 np0005589310 systemd-logind[797]: New session 34 of user zuul.
Jan 20 14:05:32 np0005589310 systemd[1]: Started Session 34 of User zuul.
Jan 20 14:05:32 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.373511314s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 active pruub 127.006271362s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:32 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.372819901s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 127.006271362s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:32 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:32 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 20 14:05:32 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 20 14:05:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 20 14:05:32 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 20 14:05:32 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 20 14:05:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 20 14:05:32 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 20 14:05:32 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=76/77 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:33 np0005589310 python3.9[98555]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:05:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 1 objects/s recovering
Jan 20 14:05:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 20 14:05:33 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 20 14:05:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 20 14:05:33 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 20 14:05:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 20 14:05:33 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 20 14:05:33 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 20 14:05:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 20 14:05:33 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 20 14:05:33 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=9.928627014s) [1] r=-1 lpr=78 pi=[59,78)/1 crt=39'39 active pruub 132.051498413s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:33 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=9.928561211s) [1] r=-1 lpr=78 pi=[59,78)/1 crt=39'39 unknown NOTIFY pruub 132.051498413s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:33 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:33 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 20 14:05:33 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:05:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:05:34 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 20 14:05:34 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 20 14:05:34 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 20 14:05:34 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:34 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 20 14:05:34 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 20 14:05:35 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 20 14:05:35 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 20 14:05:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:05:35 np0005589310 python3.9[98836]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:05:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:05:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:05:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:05:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:05:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:05:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:05:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:05:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:05:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:05:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:05:35 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:05:35 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:05:36 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 20 14:05:36 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 20 14:05:36 np0005589310 podman[98928]: 2026-01-20 19:05:36.224743131 +0000 UTC m=+0.047748958 container create af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_moser, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:05:36 np0005589310 systemd[1]: Started libpod-conmon-af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f.scope.
Jan 20 14:05:36 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:05:36 np0005589310 podman[98928]: 2026-01-20 19:05:36.29751185 +0000 UTC m=+0.120517697 container init af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_moser, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:05:36 np0005589310 podman[98928]: 2026-01-20 19:05:36.203992552 +0000 UTC m=+0.026998399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:05:36 np0005589310 podman[98928]: 2026-01-20 19:05:36.30542938 +0000 UTC m=+0.128435207 container start af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_moser, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:05:36 np0005589310 podman[98928]: 2026-01-20 19:05:36.308917554 +0000 UTC m=+0.131923411 container attach af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_moser, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:05:36 np0005589310 competent_moser[98945]: 167 167
Jan 20 14:05:36 np0005589310 systemd[1]: libpod-af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f.scope: Deactivated successfully.
Jan 20 14:05:36 np0005589310 podman[98928]: 2026-01-20 19:05:36.314391866 +0000 UTC m=+0.137397703 container died af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_moser, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:05:36 np0005589310 systemd[1]: var-lib-containers-storage-overlay-916886354eb1f52a99886afc516d1d208c3a98b294b1cda7361d954a3b199812-merged.mount: Deactivated successfully.
Jan 20 14:05:36 np0005589310 podman[98928]: 2026-01-20 19:05:36.354188492 +0000 UTC m=+0.177194319 container remove af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 14:05:36 np0005589310 systemd[1]: libpod-conmon-af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f.scope: Deactivated successfully.
Jan 20 14:05:36 np0005589310 podman[98968]: 2026-01-20 19:05:36.502435045 +0000 UTC m=+0.040924814 container create b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:05:36 np0005589310 systemd[1]: Started libpod-conmon-b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200.scope.
Jan 20 14:05:36 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:05:36 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914bc53d29c9881c233e79c0609f96bd32d8fdd98003af673c5e07a7c06a6f7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:36 np0005589310 podman[98968]: 2026-01-20 19:05:36.486062261 +0000 UTC m=+0.024552010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:05:36 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914bc53d29c9881c233e79c0609f96bd32d8fdd98003af673c5e07a7c06a6f7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:36 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914bc53d29c9881c233e79c0609f96bd32d8fdd98003af673c5e07a7c06a6f7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:36 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914bc53d29c9881c233e79c0609f96bd32d8fdd98003af673c5e07a7c06a6f7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:36 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914bc53d29c9881c233e79c0609f96bd32d8fdd98003af673c5e07a7c06a6f7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:36 np0005589310 podman[98968]: 2026-01-20 19:05:36.592387047 +0000 UTC m=+0.130876826 container init b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 20 14:05:36 np0005589310 podman[98968]: 2026-01-20 19:05:36.600622815 +0000 UTC m=+0.139112564 container start b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 20 14:05:36 np0005589310 podman[98968]: 2026-01-20 19:05:36.604628111 +0000 UTC m=+0.143118150 container attach b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:05:36 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:05:36 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:05:36 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:05:37 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 20 14:05:37 np0005589310 zen_heisenberg[98985]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:05:37 np0005589310 zen_heisenberg[98985]: --> All data devices are unavailable
Jan 20 14:05:37 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 20 14:05:37 np0005589310 systemd[1]: libpod-b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200.scope: Deactivated successfully.
Jan 20 14:05:37 np0005589310 podman[98968]: 2026-01-20 19:05:37.131724041 +0000 UTC m=+0.670213830 container died b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:05:37 np0005589310 systemd[1]: var-lib-containers-storage-overlay-914bc53d29c9881c233e79c0609f96bd32d8fdd98003af673c5e07a7c06a6f7c-merged.mount: Deactivated successfully.
Jan 20 14:05:37 np0005589310 podman[98968]: 2026-01-20 19:05:37.191042326 +0000 UTC m=+0.729532085 container remove b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:05:37 np0005589310 systemd[1]: libpod-conmon-b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200.scope: Deactivated successfully.
Jan 20 14:05:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:05:37 np0005589310 podman[99080]: 2026-01-20 19:05:37.680939751 +0000 UTC m=+0.071818917 container create ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 20 14:05:37 np0005589310 systemd[1]: Started libpod-conmon-ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098.scope.
Jan 20 14:05:37 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:05:37 np0005589310 podman[99080]: 2026-01-20 19:05:37.66134566 +0000 UTC m=+0.052224856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:05:37 np0005589310 podman[99080]: 2026-01-20 19:05:37.758427893 +0000 UTC m=+0.149307089 container init ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gould, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:05:37 np0005589310 podman[99080]: 2026-01-20 19:05:37.767116031 +0000 UTC m=+0.157995207 container start ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gould, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:05:37 np0005589310 admiring_gould[99096]: 167 167
Jan 20 14:05:37 np0005589310 systemd[1]: libpod-ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098.scope: Deactivated successfully.
Jan 20 14:05:37 np0005589310 podman[99080]: 2026-01-20 19:05:37.771420225 +0000 UTC m=+0.162299421 container attach ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gould, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 14:05:37 np0005589310 podman[99080]: 2026-01-20 19:05:37.771971029 +0000 UTC m=+0.162850215 container died ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gould, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:05:37 np0005589310 systemd[1]: var-lib-containers-storage-overlay-9888e5a3688b5c0ea8ee49ba80965201f1d51b00bb1a3db31c39fea0ab471b93-merged.mount: Deactivated successfully.
Jan 20 14:05:37 np0005589310 podman[99080]: 2026-01-20 19:05:37.855221289 +0000 UTC m=+0.246100465 container remove ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gould, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 20 14:05:37 np0005589310 systemd[1]: libpod-conmon-ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098.scope: Deactivated successfully.
Jan 20 14:05:38 np0005589310 podman[99120]: 2026-01-20 19:05:38.031951187 +0000 UTC m=+0.053144748 container create ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:05:38 np0005589310 systemd[1]: Started libpod-conmon-ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646.scope.
Jan 20 14:05:38 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 20 14:05:38 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 20 14:05:38 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:05:38 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3283580cdbe80f026a6b82892c45d27efee982baae9666590ce500e4d7b1f4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:38 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3283580cdbe80f026a6b82892c45d27efee982baae9666590ce500e4d7b1f4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:38 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3283580cdbe80f026a6b82892c45d27efee982baae9666590ce500e4d7b1f4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:38 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3283580cdbe80f026a6b82892c45d27efee982baae9666590ce500e4d7b1f4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:38 np0005589310 podman[99120]: 2026-01-20 19:05:38.009618631 +0000 UTC m=+0.030812202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:05:38 np0005589310 podman[99120]: 2026-01-20 19:05:38.109975752 +0000 UTC m=+0.131169303 container init ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 14:05:38 np0005589310 podman[99120]: 2026-01-20 19:05:38.116163261 +0000 UTC m=+0.137356812 container start ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_joliot, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:05:38 np0005589310 podman[99120]: 2026-01-20 19:05:38.120111765 +0000 UTC m=+0.141305346 container attach ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 14:05:38 np0005589310 funny_joliot[99137]: {
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:    "0": [
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:        {
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "devices": [
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "/dev/loop3"
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            ],
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "lv_name": "ceph_lv0",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "lv_size": "21470642176",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "name": "ceph_lv0",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "tags": {
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.cluster_name": "ceph",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.crush_device_class": "",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.encrypted": "0",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.objectstore": "bluestore",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.osd_id": "0",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.type": "block",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.vdo": "0",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.with_tpm": "0"
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            },
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "type": "block",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "vg_name": "ceph_vg0"
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:        }
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:    ],
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:    "1": [
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:        {
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "devices": [
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "/dev/loop4"
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            ],
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "lv_name": "ceph_lv1",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "lv_size": "21470642176",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "name": "ceph_lv1",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "tags": {
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.cluster_name": "ceph",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.crush_device_class": "",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.encrypted": "0",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.objectstore": "bluestore",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.osd_id": "1",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.type": "block",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.vdo": "0",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.with_tpm": "0"
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            },
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "type": "block",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "vg_name": "ceph_vg1"
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:        }
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:    ],
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:    "2": [
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:        {
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "devices": [
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "/dev/loop5"
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            ],
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "lv_name": "ceph_lv2",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "lv_size": "21470642176",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "name": "ceph_lv2",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "tags": {
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.cluster_name": "ceph",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.crush_device_class": "",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.encrypted": "0",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.objectstore": "bluestore",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.osd_id": "2",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.type": "block",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.vdo": "0",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:                "ceph.with_tpm": "0"
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            },
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "type": "block",
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:            "vg_name": "ceph_vg2"
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:        }
Jan 20 14:05:38 np0005589310 funny_joliot[99137]:    ]
Jan 20 14:05:38 np0005589310 funny_joliot[99137]: }
Jan 20 14:05:38 np0005589310 systemd[1]: libpod-ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646.scope: Deactivated successfully.
Jan 20 14:05:38 np0005589310 podman[99120]: 2026-01-20 19:05:38.419061731 +0000 UTC m=+0.440255282 container died ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 14:05:38 np0005589310 systemd[1]: var-lib-containers-storage-overlay-f3283580cdbe80f026a6b82892c45d27efee982baae9666590ce500e4d7b1f4a-merged.mount: Deactivated successfully.
Jan 20 14:05:38 np0005589310 podman[99120]: 2026-01-20 19:05:38.468635032 +0000 UTC m=+0.489828593 container remove ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_joliot, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:05:38 np0005589310 systemd[1]: libpod-conmon-ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646.scope: Deactivated successfully.
Jan 20 14:05:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:38 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 20 14:05:38 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 20 14:05:38 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 20 14:05:38 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 20 14:05:38 np0005589310 podman[99221]: 2026-01-20 19:05:38.949013858 +0000 UTC m=+0.050006462 container create c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:05:38 np0005589310 systemd[1]: Started libpod-conmon-c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc.scope.
Jan 20 14:05:39 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:05:39 np0005589310 podman[99221]: 2026-01-20 19:05:38.928237599 +0000 UTC m=+0.029230233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:05:39 np0005589310 podman[99221]: 2026-01-20 19:05:39.025315442 +0000 UTC m=+0.126308056 container init c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:05:39 np0005589310 podman[99221]: 2026-01-20 19:05:39.033298894 +0000 UTC m=+0.134291478 container start c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:05:39 np0005589310 ecstatic_moore[99238]: 167 167
Jan 20 14:05:39 np0005589310 systemd[1]: libpod-c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc.scope: Deactivated successfully.
Jan 20 14:05:39 np0005589310 conmon[99238]: conmon c6ae66f36e581213c8fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc.scope/container/memory.events
Jan 20 14:05:39 np0005589310 podman[99221]: 2026-01-20 19:05:39.037557947 +0000 UTC m=+0.138550561 container attach c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:05:39 np0005589310 podman[99221]: 2026-01-20 19:05:39.037870574 +0000 UTC m=+0.138863158 container died c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 14:05:39 np0005589310 systemd[1]: var-lib-containers-storage-overlay-a92ceaebfb7f9798902ee0618b90c37beb002b547bff214e60b19b722159ab45-merged.mount: Deactivated successfully.
Jan 20 14:05:39 np0005589310 podman[99221]: 2026-01-20 19:05:39.074804931 +0000 UTC m=+0.175797515 container remove c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:05:39 np0005589310 systemd[1]: libpod-conmon-c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc.scope: Deactivated successfully.
Jan 20 14:05:39 np0005589310 podman[99265]: 2026-01-20 19:05:39.21495569 +0000 UTC m=+0.039820828 container create 056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_robinson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 14:05:39 np0005589310 systemd[1]: Started libpod-conmon-056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d.scope.
Jan 20 14:05:39 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:05:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b911bef188033b983d9c58f0001fb680d0fe56e92e06af6e3f392a45164f1a85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b911bef188033b983d9c58f0001fb680d0fe56e92e06af6e3f392a45164f1a85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b911bef188033b983d9c58f0001fb680d0fe56e92e06af6e3f392a45164f1a85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b911bef188033b983d9c58f0001fb680d0fe56e92e06af6e3f392a45164f1a85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:05:39 np0005589310 podman[99265]: 2026-01-20 19:05:39.290018184 +0000 UTC m=+0.114883342 container init 056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_robinson, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:05:39 np0005589310 podman[99265]: 2026-01-20 19:05:39.197111951 +0000 UTC m=+0.021977119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:05:39 np0005589310 podman[99265]: 2026-01-20 19:05:39.295304871 +0000 UTC m=+0.120170009 container start 056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_robinson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:05:39 np0005589310 podman[99265]: 2026-01-20 19:05:39.298264802 +0000 UTC m=+0.123129970 container attach 056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_robinson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 20 14:05:39 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Jan 20 14:05:39 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Jan 20 14:05:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 14:05:39 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 20 14:05:39 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 20 14:05:39 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 20 14:05:39 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 20 14:05:39 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 20 14:05:39 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 20 14:05:39 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 20 14:05:39 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 20 14:05:39 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 20 14:05:39 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 20 14:05:39 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 20 14:05:39 np0005589310 lvm[99367]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:05:39 np0005589310 lvm[99368]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:05:39 np0005589310 lvm[99368]: VG ceph_vg1 finished
Jan 20 14:05:39 np0005589310 lvm[99367]: VG ceph_vg0 finished
Jan 20 14:05:39 np0005589310 lvm[99370]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:05:39 np0005589310 lvm[99370]: VG ceph_vg2 finished
Jan 20 14:05:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481973648s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 active pruub 133.654129028s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481848717s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.654129028s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485882759s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 active pruub 133.658874512s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:39 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485768318s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 133.658874512s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:39 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:39 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:40 np0005589310 friendly_robinson[99283]: {}
Jan 20 14:05:40 np0005589310 systemd[1]: libpod-056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d.scope: Deactivated successfully.
Jan 20 14:05:40 np0005589310 podman[99265]: 2026-01-20 19:05:40.074290364 +0000 UTC m=+0.899155532 container died 056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_robinson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:05:40 np0005589310 systemd[1]: libpod-056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d.scope: Consumed 1.288s CPU time.
Jan 20 14:05:40 np0005589310 systemd[1]: var-lib-containers-storage-overlay-b911bef188033b983d9c58f0001fb680d0fe56e92e06af6e3f392a45164f1a85-merged.mount: Deactivated successfully.
Jan 20 14:05:40 np0005589310 podman[99265]: 2026-01-20 19:05:40.530909448 +0000 UTC m=+1.355774606 container remove 056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_robinson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:05:40 np0005589310 systemd[1]: libpod-conmon-056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d.scope: Deactivated successfully.
Jan 20 14:05:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:05:40 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:05:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:05:40 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:05:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 20 14:05:40 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 20 14:05:40 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 20 14:05:40 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:05:40 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:05:40 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 20 14:05:40 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 20 14:05:40 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:40 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:40 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:40 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:40 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:40 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:40 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:40 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:41 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 20 14:05:41 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 20 14:05:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 14:05:41 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 20 14:05:41 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 20 14:05:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 20 14:05:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 20 14:05:41 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 20 14:05:42 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 20 14:05:42 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 20 14:05:42 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:42 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:42 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 20 14:05:42 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 20 14:05:42 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 20 14:05:42 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.692206383s) [2] async=[2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 active pruub 142.686187744s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:42 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.691668510s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 142.686187744s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:42 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693123817s) [2] async=[2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 active pruub 142.688095093s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:42 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693052292s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 142.688095093s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:42 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 pct=0'0 crt=68'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:42 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:42 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:42 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:43 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 20 14:05:43 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 20 14:05:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:05:43 np0005589310 systemd[1]: session-34.scope: Deactivated successfully.
Jan 20 14:05:43 np0005589310 systemd[1]: session-34.scope: Consumed 8.513s CPU time.
Jan 20 14:05:43 np0005589310 systemd-logind[797]: Session 34 logged out. Waiting for processes to exit.
Jan 20 14:05:43 np0005589310 systemd-logind[797]: Removed session 34.
Jan 20 14:05:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 20 14:05:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 20 14:05:43 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 20 14:05:43 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=83/84 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:43 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2905063468931614e-06 of space, bias 4.0, pg target 0.0015486076162717936 quantized to 16 (current 16)
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:05:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:05:44 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 20 14:05:44 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 20 14:05:45 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 20 14:05:45 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 20 14:05:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 639 B/s wr, 21 op/s; 87 B/s, 2 objects/s recovering
Jan 20 14:05:45 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 20 14:05:45 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 20 14:05:45 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 20 14:05:45 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 20 14:05:45 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 20 14:05:45 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 20 14:05:45 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 20 14:05:45 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 20 14:05:45 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 20 14:05:45 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 20 14:05:45 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 20 14:05:46 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 20 14:05:46 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 20 14:05:46 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=12.617008209s) [1] r=-1 lpr=85 pi=[62,85)/1 crt=39'39 active pruub 147.753036499s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:46 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=12.616838455s) [1] r=-1 lpr=85 pi=[62,85)/1 crt=39'39 unknown NOTIFY pruub 147.753036499s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:46 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:46 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 20 14:05:46 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 20 14:05:46 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 20 14:05:46 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 20 14:05:46 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 20 14:05:46 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:47 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 20 14:05:47 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 20 14:05:47 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 20 14:05:47 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 20 14:05:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 643 B/s wr, 21 op/s; 87 B/s, 2 objects/s recovering
Jan 20 14:05:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 20 14:05:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 20 14:05:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 20 14:05:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 20 14:05:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 20 14:05:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 20 14:05:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 20 14:05:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 20 14:05:47 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 20 14:05:47 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 20 14:05:47 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 20 14:05:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:48 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 20 14:05:48 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 20 14:05:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 355 B/s wr, 16 op/s; 84 B/s, 2 objects/s recovering
Jan 20 14:05:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 20 14:05:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 20 14:05:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 20 14:05:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 20 14:05:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 20 14:05:49 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 20 14:05:49 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 20 14:05:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 20 14:05:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 20 14:05:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 20 14:05:49 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 20 14:05:50 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 20 14:05:50 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 20 14:05:50 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88 pruub=9.120479584s) [2] r=-1 lpr=88 pi=[59,88)/1 crt=39'39 active pruub 148.052017212s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:50 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88 pruub=9.120371819s) [2] r=-1 lpr=88 pi=[59,88)/1 crt=39'39 unknown NOTIFY pruub 148.052017212s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:50 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:50 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 20 14:05:50 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 20 14:05:50 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 20 14:05:50 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 20 14:05:50 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 20 14:05:50 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:51 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 20 14:05:51 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 20 14:05:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Jan 20 14:05:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 20 14:05:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 20 14:05:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 20 14:05:52 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 20 14:05:52 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 20 14:05:52 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 20 14:05:52 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 20 14:05:52 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 20 14:05:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 20 14:05:52 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 20 14:05:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 20 14:05:53 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 20 14:05:53 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 20 14:05:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Jan 20 14:05:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 20 14:05:53 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 20 14:05:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 20 14:05:53 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 20 14:05:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 20 14:05:53 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 20 14:05:53 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 20 14:05:53 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 20 14:05:54 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 20 14:05:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 108 B/s, 0 objects/s recovering
Jan 20 14:05:55 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 20 14:05:55 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 20 14:05:55 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 20 14:05:55 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 20 14:05:55 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 20 14:05:55 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 20 14:05:55 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 20 14:05:56 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 20 14:05:57 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 20 14:05:57 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 20 14:05:57 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 20 14:05:57 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 20 14:05:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Jan 20 14:05:57 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 20 14:05:57 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 20 14:05:57 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 20 14:05:57 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 20 14:05:57 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 20 14:05:57 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 20 14:05:57 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93 pruub=10.109041214s) [2] r=-1 lpr=93 pi=[56,93)/1 crt=68'484 lcod 68'484 active pruub 156.283859253s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:57 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93 pruub=10.108978271s) [2] r=-1 lpr=93 pi=[56,93)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 156.283859253s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:57 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 20 14:05:57 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:58 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 20 14:05:58 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 20 14:05:58 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 20 14:05:58 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 20 14:05:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:05:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 20 14:05:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 20 14:05:58 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 20 14:05:58 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:58 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:05:58 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:05:58 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:05:58 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 20 14:05:59 np0005589310 systemd-logind[797]: New session 35 of user zuul.
Jan 20 14:05:59 np0005589310 systemd[1]: Started Session 35 of User zuul.
Jan 20 14:05:59 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 20 14:05:59 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 20 14:05:59 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 20 14:05:59 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 20 14:05:59 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 20 14:05:59 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 20 14:05:59 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 20 14:05:59 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 20 14:05:59 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 20 14:05:59 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:05:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:05:59 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 20 14:05:59 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 20 14:05:59 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 20 14:05:59 np0005589310 python3.9[99596]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 20 14:06:00 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 20 14:06:00 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 20 14:06:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 20 14:06:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 20 14:06:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 20 14:06:00 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 20 14:06:00 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96 pruub=15.003457069s) [2] async=[2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 active pruub 163.980422974s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:00 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96 pruub=15.003147125s) [2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 163.980422974s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:00 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 pct=0'0 crt=68'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:00 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:01 np0005589310 python3.9[99770]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:06:01 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 20 14:06:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Jan 20 14:06:01 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 20 14:06:01 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 20 14:06:02 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 20 14:06:02 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 20 14:06:02 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 20 14:06:02 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=96/97 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:06:02 np0005589310 python3.9[99926]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:06:02 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 20 14:06:02 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 20 14:06:03 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 20 14:06:03 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 20 14:06:03 np0005589310 python3.9[100079]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:06:03 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 20 14:06:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 20 14:06:03 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 20 14:06:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 20 14:06:03 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 20 14:06:03 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98 pruub=12.439196587s) [1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 active pruub 164.286026001s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:03 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98 pruub=12.438729286s) [1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 unknown NOTIFY pruub 164.286026001s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:03 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:03 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 20 14:06:03 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 20 14:06:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 20 14:06:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 20 14:06:03 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 20 14:06:03 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:03 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:03 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:03 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Jan 20 14:06:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 20 14:06:03 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 20 14:06:04 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 20 14:06:04 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 20 14:06:04 np0005589310 python3.9[100233]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:06:04 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 20 14:06:04 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 20 14:06:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:06:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:06:04 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 20 14:06:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:06:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:06:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:06:04 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 20 14:06:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:06:04 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 20 14:06:04 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 20 14:06:04 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 20 14:06:04 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 20 14:06:04 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861665726s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 active pruub 157.961791992s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:04 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861179352s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY pruub 157.961791992s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:04 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:04 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:06:04 np0005589310 python3.9[100385]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:06:05 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 20 14:06:05 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 20 14:06:05 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 20 14:06:05 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 20 14:06:05 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 20 14:06:05 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 20 14:06:05 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 20 14:06:05 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 20 14:06:05 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:05 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:05 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101 pruub=14.991423607s) [1] async=[1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 active pruub 168.984329224s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:05 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101 pruub=14.991363525s) [1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 unknown NOTIFY pruub 168.984329224s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:05 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:05 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:05 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:05 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:06:05 np0005589310 python3.9[100535]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:06:05 np0005589310 network[100552]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:06:05 np0005589310 network[100553]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:06:05 np0005589310 network[100554]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:06:06 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 20 14:06:06 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 20 14:06:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 20 14:06:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 20 14:06:06 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 20 14:06:06 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:06:06 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:06:07 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 20 14:06:07 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 20 14:06:07 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 20 14:06:07 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 20 14:06:07 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 20 14:06:07 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993875504s) [0] async=[0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 active pruub 160.133941650s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:07 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993478775s) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY pruub 160.133941650s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:07 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:07 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:06:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 20 14:06:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 20 14:06:08 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 20 14:06:08 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=103/104 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:06:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:06:10 np0005589310 python3.9[100815]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:06:10 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 20 14:06:10 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 20 14:06:11 np0005589310 python3.9[100965]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:06:11 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 20 14:06:11 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 20 14:06:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 341 B/s wr, 7 op/s; 36 B/s, 1 objects/s recovering
Jan 20 14:06:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 20 14:06:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 20 14:06:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 20 14:06:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 20 14:06:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 20 14:06:11 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 20 14:06:11 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 20 14:06:12 np0005589310 python3.9[101119]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:06:12 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 20 14:06:13 np0005589310 python3.9[101277]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:06:13 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 20 14:06:13 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 20 14:06:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 340 B/s wr, 7 op/s; 36 B/s, 1 objects/s recovering
Jan 20 14:06:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 20 14:06:13 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 20 14:06:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 20 14:06:13 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 20 14:06:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 20 14:06:13 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 20 14:06:13 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 20 14:06:13 np0005589310 python3.9[101361]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:06:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 20 14:06:15 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 20 14:06:15 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 20 14:06:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 292 B/s wr, 6 op/s; 31 B/s, 1 objects/s recovering
Jan 20 14:06:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 20 14:06:15 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 20 14:06:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 20 14:06:15 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 20 14:06:15 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 20 14:06:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 20 14:06:15 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 20 14:06:15 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107 pruub=8.961336136s) [2] r=-1 lpr=107 pi=[57,107)/1 crt=68'486 lcod 68'486 active pruub 173.299713135s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:15 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107 pruub=8.961268425s) [2] r=-1 lpr=107 pi=[57,107)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 173.299713135s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:15 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:16 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 20 14:06:16 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 20 14:06:16 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 20 14:06:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 20 14:06:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 20 14:06:16 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 20 14:06:16 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:16 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:16 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:16 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:06:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 20 14:06:17 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 20 14:06:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 20 14:06:17 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 20 14:06:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 20 14:06:17 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 20 14:06:17 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 20 14:06:18 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 20 14:06:18 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 20 14:06:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:18 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:06:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 20 14:06:18 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 20 14:06:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 20 14:06:18 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 20 14:06:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 pct=0'0 crt=68'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:18 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110 pruub=15.737841606s) [2] async=[2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 active pruub 183.126052856s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:18 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110 pruub=15.737683296s) [2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 183.126052856s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:06:19 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 20 14:06:19 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 20 14:06:19 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 20 14:06:19 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 20 14:06:19 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 20 14:06:19 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 20 14:06:19 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 20 14:06:19 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 20 14:06:19 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 20 14:06:19 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=110/111 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:06:20 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 20 14:06:20 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 20 14:06:20 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 20 14:06:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 1 objects/s recovering
Jan 20 14:06:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 20 14:06:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 20 14:06:21 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 20 14:06:21 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 20 14:06:22 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 20 14:06:22 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 20 14:06:22 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 20 14:06:22 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 20 14:06:22 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 20 14:06:22 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 20 14:06:22 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 20 14:06:22 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038787842s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 active pruub 169.396194458s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:22 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038736343s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY pruub 169.396194458s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:22 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 20 14:06:23 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 20 14:06:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 20 14:06:23 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 20 14:06:23 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:23 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:23 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:23 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:23 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 20 14:06:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:23 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 20 14:06:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 89 B/s, 1 objects/s recovering
Jan 20 14:06:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 20 14:06:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 20 14:06:24 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 20 14:06:24 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 20 14:06:24 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 20 14:06:24 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 20 14:06:24 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 20 14:06:24 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 20 14:06:24 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 20 14:06:24 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:06:24 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 20 14:06:24 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 20 14:06:25 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 20 14:06:25 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 20 14:06:25 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 20 14:06:25 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978566170s) [0] async=[0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 active pruub 178.060546875s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:25 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978322029s) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY pruub 178.060546875s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:25 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 20 14:06:25 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 pct=0'0 crt=68'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:25 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 2 objects/s recovering
Jan 20 14:06:25 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 20 14:06:25 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 20 14:06:25 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 20 14:06:25 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 20 14:06:26 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 20 14:06:26 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 20 14:06:26 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 20 14:06:26 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 20 14:06:26 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 20 14:06:26 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=115/116 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:06:26 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705393791s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 active pruub 173.962097168s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:26 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705332756s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 173.962097168s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:26 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:27 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 20 14:06:27 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 20 14:06:27 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 20 14:06:27 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 20 14:06:27 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 20 14:06:27 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 20 14:06:27 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:27 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:27 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:27 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 2 objects/s recovering
Jan 20 14:06:27 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 14:06:27 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:06:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 20 14:06:28 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:06:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 20 14:06:28 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 20 14:06:28 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912906647s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 active pruub 176.032379150s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:28 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912783623s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY pruub 176.032379150s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:28 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 14:06:28 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:29 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:06:29 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 20 14:06:29 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 20 14:06:29 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 20 14:06:29 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 14:06:29 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 pct=0'0 crt=68'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:29 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:29 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901124954s) [0] async=[0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 active pruub 183.038589478s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:29 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:29 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901021957s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 183.038589478s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:29 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:29 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:29 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:29 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 20 14:06:29 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 20 14:06:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:06:30 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 20 14:06:30 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 20 14:06:30 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 20 14:06:30 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 20 14:06:30 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 20 14:06:30 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 20 14:06:30 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 20 14:06:30 np0005589310 ceph-osd[86022]: osd.0 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=119/120 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:06:30 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:06:31 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 20 14:06:31 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 20 14:06:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:06:31
Jan 20 14:06:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:06:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Some PGs (0.003279) are unknown; try again later
Jan 20 14:06:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 20 14:06:31 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 20 14:06:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:06:31 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 20 14:06:31 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 20 14:06:31 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 20 14:06:31 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:31 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:06:31 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224273682s) [1] async=[1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 active pruub 184.419616699s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:06:31 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224187851s) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY pruub 184.419616699s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:06:32 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 20 14:06:32 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 20 14:06:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 20 14:06:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 20 14:06:32 np0005589310 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 20 14:06:32 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:06:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Jan 20 14:06:34 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 20 14:06:34 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:06:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:06:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 341 B/s wr, 7 op/s; 80 B/s, 3 objects/s recovering
Jan 20 14:06:35 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 20 14:06:35 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 20 14:06:36 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 20 14:06:36 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 20 14:06:37 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 20 14:06:37 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 20 14:06:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 291 B/s wr, 6 op/s; 47 B/s, 2 objects/s recovering
Jan 20 14:06:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 255 B/s wr, 5 op/s; 41 B/s, 1 objects/s recovering
Jan 20 14:06:39 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 20 14:06:39 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 20 14:06:40 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 20 14:06:40 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 20 14:06:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:06:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:06:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:06:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:06:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:06:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:06:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:06:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:06:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:06:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:06:41 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:06:41 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:06:41 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 20 14:06:41 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 20 14:06:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 205 B/s wr, 4 op/s; 33 B/s, 1 objects/s recovering
Jan 20 14:06:41 np0005589310 podman[101647]: 2026-01-20 19:06:41.75205893 +0000 UTC m=+0.040173468 container create 7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:06:41 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:06:41 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:06:41 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:06:41 np0005589310 systemd[1]: Started libpod-conmon-7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36.scope.
Jan 20 14:06:41 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:06:41 np0005589310 podman[101647]: 2026-01-20 19:06:41.734917547 +0000 UTC m=+0.023032115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:06:41 np0005589310 podman[101647]: 2026-01-20 19:06:41.845820461 +0000 UTC m=+0.133935039 container init 7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:06:41 np0005589310 podman[101647]: 2026-01-20 19:06:41.854786293 +0000 UTC m=+0.142900841 container start 7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:06:41 np0005589310 podman[101647]: 2026-01-20 19:06:41.858486288 +0000 UTC m=+0.146600846 container attach 7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 20 14:06:41 np0005589310 objective_chatelet[101663]: 167 167
Jan 20 14:06:41 np0005589310 systemd[1]: libpod-7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36.scope: Deactivated successfully.
Jan 20 14:06:41 np0005589310 podman[101647]: 2026-01-20 19:06:41.861909327 +0000 UTC m=+0.150023865 container died 7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:06:41 np0005589310 systemd[1]: var-lib-containers-storage-overlay-7d943b9c3a501da3b14f2cac1f5614b5e10f6a7688972973242168ce25577f4b-merged.mount: Deactivated successfully.
Jan 20 14:06:41 np0005589310 podman[101647]: 2026-01-20 19:06:41.903495871 +0000 UTC m=+0.191610419 container remove 7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:06:41 np0005589310 systemd[1]: libpod-conmon-7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36.scope: Deactivated successfully.
Jan 20 14:06:42 np0005589310 podman[101688]: 2026-01-20 19:06:42.077575837 +0000 UTC m=+0.044728056 container create 8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 20 14:06:42 np0005589310 systemd[1]: Started libpod-conmon-8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2.scope.
Jan 20 14:06:42 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:06:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f990d2d4312e4569f5e6f9254514ec6af5fc4b8064d01a12c5d161b88d2f07f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f990d2d4312e4569f5e6f9254514ec6af5fc4b8064d01a12c5d161b88d2f07f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f990d2d4312e4569f5e6f9254514ec6af5fc4b8064d01a12c5d161b88d2f07f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f990d2d4312e4569f5e6f9254514ec6af5fc4b8064d01a12c5d161b88d2f07f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f990d2d4312e4569f5e6f9254514ec6af5fc4b8064d01a12c5d161b88d2f07f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:42 np0005589310 podman[101688]: 2026-01-20 19:06:42.056422601 +0000 UTC m=+0.023574860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:06:42 np0005589310 podman[101688]: 2026-01-20 19:06:42.161915795 +0000 UTC m=+0.129068014 container init 8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:06:42 np0005589310 podman[101688]: 2026-01-20 19:06:42.167234293 +0000 UTC m=+0.134386502 container start 8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:06:42 np0005589310 podman[101688]: 2026-01-20 19:06:42.17217911 +0000 UTC m=+0.139331319 container attach 8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:06:42 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 20 14:06:42 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 20 14:06:42 np0005589310 charming_mirzakhani[101705]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:06:42 np0005589310 charming_mirzakhani[101705]: --> All data devices are unavailable
Jan 20 14:06:42 np0005589310 systemd[1]: libpod-8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2.scope: Deactivated successfully.
Jan 20 14:06:42 np0005589310 podman[101688]: 2026-01-20 19:06:42.662122494 +0000 UTC m=+0.629274713 container died 8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:06:42 np0005589310 systemd[1]: var-lib-containers-storage-overlay-7f990d2d4312e4569f5e6f9254514ec6af5fc4b8064d01a12c5d161b88d2f07f-merged.mount: Deactivated successfully.
Jan 20 14:06:42 np0005589310 podman[101688]: 2026-01-20 19:06:42.703674237 +0000 UTC m=+0.670826446 container remove 8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 20 14:06:42 np0005589310 systemd[1]: libpod-conmon-8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2.scope: Deactivated successfully.
Jan 20 14:06:43 np0005589310 podman[101801]: 2026-01-20 19:06:43.118459579 +0000 UTC m=+0.041056251 container create 4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:06:43 np0005589310 systemd[1]: Started libpod-conmon-4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611.scope.
Jan 20 14:06:43 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:06:43 np0005589310 podman[101801]: 2026-01-20 19:06:43.099967012 +0000 UTC m=+0.022563684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:06:43 np0005589310 podman[101801]: 2026-01-20 19:06:43.24627562 +0000 UTC m=+0.168872332 container init 4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:06:43 np0005589310 podman[101801]: 2026-01-20 19:06:43.253108327 +0000 UTC m=+0.175704999 container start 4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 20 14:06:43 np0005589310 awesome_shannon[101817]: 167 167
Jan 20 14:06:43 np0005589310 systemd[1]: libpod-4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611.scope: Deactivated successfully.
Jan 20 14:06:43 np0005589310 conmon[101817]: conmon 4e1eb00a3925a24f2203 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611.scope/container/memory.events
Jan 20 14:06:43 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 20 14:06:43 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 20 14:06:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 186 B/s wr, 4 op/s; 30 B/s, 1 objects/s recovering
Jan 20 14:06:43 np0005589310 podman[101801]: 2026-01-20 19:06:43.687708622 +0000 UTC m=+0.610305294 container attach 4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 14:06:43 np0005589310 podman[101801]: 2026-01-20 19:06:43.688148103 +0000 UTC m=+0.610744785 container died 4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:06:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:43 np0005589310 systemd[1]: var-lib-containers-storage-overlay-2fe398a2abb63578d6976c009c0407435bce0658561d2f9eb2de9763266c59de-merged.mount: Deactivated successfully.
Jan 20 14:06:43 np0005589310 podman[101801]: 2026-01-20 19:06:43.775293893 +0000 UTC m=+0.697890575 container remove 4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:06:43 np0005589310 systemd[1]: libpod-conmon-4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611.scope: Deactivated successfully.
Jan 20 14:06:43 np0005589310 podman[101843]: 2026-01-20 19:06:43.994371801 +0000 UTC m=+0.109676624 container create 1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 20 14:06:44 np0005589310 podman[101843]: 2026-01-20 19:06:43.908932045 +0000 UTC m=+0.024236898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:06:44 np0005589310 systemd[1]: Started libpod-conmon-1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57.scope.
Jan 20 14:06:44 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 20 14:06:44 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 20 14:06:44 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:06:44 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62672b228a469185d2b51b53d343aaecb9e56f55fe0b518a1a7c4efb72a6ce64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:44 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62672b228a469185d2b51b53d343aaecb9e56f55fe0b518a1a7c4efb72a6ce64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:44 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62672b228a469185d2b51b53d343aaecb9e56f55fe0b518a1a7c4efb72a6ce64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:44 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62672b228a469185d2b51b53d343aaecb9e56f55fe0b518a1a7c4efb72a6ce64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:44 np0005589310 podman[101843]: 2026-01-20 19:06:44.068835694 +0000 UTC m=+0.184140537 container init 1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_franklin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:06:44 np0005589310 podman[101843]: 2026-01-20 19:06:44.07447298 +0000 UTC m=+0.189777823 container start 1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_franklin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 14:06:44 np0005589310 podman[101843]: 2026-01-20 19:06:44.077754074 +0000 UTC m=+0.193058907 container attach 1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]: {
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:    "0": [
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:        {
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "devices": [
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "/dev/loop3"
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            ],
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "lv_name": "ceph_lv0",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "lv_size": "21470642176",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "name": "ceph_lv0",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "tags": {
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.cluster_name": "ceph",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.crush_device_class": "",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.encrypted": "0",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.objectstore": "bluestore",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.osd_id": "0",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.type": "block",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.vdo": "0",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.with_tpm": "0"
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            },
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "type": "block",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "vg_name": "ceph_vg0"
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:        }
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:    ],
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:    "1": [
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:        {
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "devices": [
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "/dev/loop4"
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            ],
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "lv_name": "ceph_lv1",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "lv_size": "21470642176",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "name": "ceph_lv1",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "tags": {
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.cluster_name": "ceph",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.crush_device_class": "",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.encrypted": "0",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.objectstore": "bluestore",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.osd_id": "1",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.type": "block",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.vdo": "0",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.with_tpm": "0"
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            },
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "type": "block",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "vg_name": "ceph_vg1"
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:        }
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:    ],
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:    "2": [
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:        {
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "devices": [
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "/dev/loop5"
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            ],
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "lv_name": "ceph_lv2",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "lv_size": "21470642176",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "name": "ceph_lv2",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "tags": {
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.cluster_name": "ceph",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.crush_device_class": "",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.encrypted": "0",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.objectstore": "bluestore",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.osd_id": "2",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.type": "block",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.vdo": "0",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:                "ceph.with_tpm": "0"
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            },
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "type": "block",
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:            "vg_name": "ceph_vg2"
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:        }
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]:    ]
Jan 20 14:06:44 np0005589310 quizzical_franklin[101860]: }
Jan 20 14:06:44 np0005589310 systemd[1]: libpod-1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57.scope: Deactivated successfully.
Jan 20 14:06:44 np0005589310 podman[101843]: 2026-01-20 19:06:44.360772283 +0000 UTC m=+0.476077126 container died 1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:06:44 np0005589310 systemd[1]: var-lib-containers-storage-overlay-62672b228a469185d2b51b53d343aaecb9e56f55fe0b518a1a7c4efb72a6ce64-merged.mount: Deactivated successfully.
Jan 20 14:06:44 np0005589310 podman[101843]: 2026-01-20 19:06:44.417178681 +0000 UTC m=+0.532483514 container remove 1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:06:44 np0005589310 systemd[1]: libpod-conmon-1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57.scope: Deactivated successfully.
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:06:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:06:44 np0005589310 podman[101943]: 2026-01-20 19:06:44.916058234 +0000 UTC m=+0.075667895 container create 4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:06:44 np0005589310 systemd[1]: Started libpod-conmon-4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f.scope.
Jan 20 14:06:44 np0005589310 podman[101943]: 2026-01-20 19:06:44.869072401 +0000 UTC m=+0.028682082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:06:44 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:06:44 np0005589310 podman[101943]: 2026-01-20 19:06:44.994849889 +0000 UTC m=+0.154459580 container init 4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 14:06:45 np0005589310 podman[101943]: 2026-01-20 19:06:45.000636448 +0000 UTC m=+0.160246119 container start 4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:06:45 np0005589310 amazing_kalam[101959]: 167 167
Jan 20 14:06:45 np0005589310 systemd[1]: libpod-4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f.scope: Deactivated successfully.
Jan 20 14:06:45 np0005589310 podman[101943]: 2026-01-20 19:06:45.006647214 +0000 UTC m=+0.166256895 container attach 4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 14:06:45 np0005589310 podman[101943]: 2026-01-20 19:06:45.007341572 +0000 UTC m=+0.166951233 container died 4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:06:45 np0005589310 systemd[1]: var-lib-containers-storage-overlay-3de09d00de97397fc24e6ff1831e9759ca792175ce5ec1d9f526ca62960b5be6-merged.mount: Deactivated successfully.
Jan 20 14:06:45 np0005589310 podman[101943]: 2026-01-20 19:06:45.061708626 +0000 UTC m=+0.221318287 container remove 4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 14:06:45 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 20 14:06:45 np0005589310 systemd[1]: libpod-conmon-4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f.scope: Deactivated successfully.
Jan 20 14:06:45 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 20 14:06:45 np0005589310 podman[101982]: 2026-01-20 19:06:45.283950416 +0000 UTC m=+0.053789611 container create 5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 14:06:45 np0005589310 systemd[1]: Started libpod-conmon-5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96.scope.
Jan 20 14:06:45 np0005589310 podman[101982]: 2026-01-20 19:06:45.262377438 +0000 UTC m=+0.032216633 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:06:45 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:06:45 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29aa5840987472461549ac0ce27b189f1e8194a3bbae2079c23e0263d10fa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:45 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29aa5840987472461549ac0ce27b189f1e8194a3bbae2079c23e0263d10fa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:45 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29aa5840987472461549ac0ce27b189f1e8194a3bbae2079c23e0263d10fa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:45 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29aa5840987472461549ac0ce27b189f1e8194a3bbae2079c23e0263d10fa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:06:45 np0005589310 podman[101982]: 2026-01-20 19:06:45.403331269 +0000 UTC m=+0.173170464 container init 5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_blackwell, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 14:06:45 np0005589310 podman[101982]: 2026-01-20 19:06:45.411937291 +0000 UTC m=+0.181776456 container start 5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_blackwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:06:45 np0005589310 podman[101982]: 2026-01-20 19:06:45.415516314 +0000 UTC m=+0.185355529 container attach 5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:06:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 170 B/s wr, 3 op/s; 27 B/s, 1 objects/s recovering
Jan 20 14:06:46 np0005589310 lvm[102075]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:06:46 np0005589310 lvm[102075]: VG ceph_vg0 finished
Jan 20 14:06:46 np0005589310 lvm[102078]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:06:46 np0005589310 lvm[102078]: VG ceph_vg1 finished
Jan 20 14:06:46 np0005589310 lvm[102080]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:06:46 np0005589310 lvm[102080]: VG ceph_vg2 finished
Jan 20 14:06:46 np0005589310 lvm[102081]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:06:46 np0005589310 lvm[102081]: VG ceph_vg1 finished
Jan 20 14:06:46 np0005589310 amazing_blackwell[101999]: {}
Jan 20 14:06:46 np0005589310 systemd[1]: libpod-5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96.scope: Deactivated successfully.
Jan 20 14:06:46 np0005589310 systemd[1]: libpod-5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96.scope: Consumed 1.372s CPU time.
Jan 20 14:06:46 np0005589310 podman[101982]: 2026-01-20 19:06:46.313065625 +0000 UTC m=+1.082904790 container died 5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_blackwell, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:06:46 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 20 14:06:46 np0005589310 systemd[1]: var-lib-containers-storage-overlay-fd29aa5840987472461549ac0ce27b189f1e8194a3bbae2079c23e0263d10fa9-merged.mount: Deactivated successfully.
Jan 20 14:06:46 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 20 14:06:46 np0005589310 podman[101982]: 2026-01-20 19:06:46.411293691 +0000 UTC m=+1.181132856 container remove 5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_blackwell, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 14:06:46 np0005589310 systemd[1]: libpod-conmon-5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96.scope: Deactivated successfully.
Jan 20 14:06:46 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:06:46 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:06:46 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:06:46 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:06:46 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:06:46 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:06:47 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 20 14:06:47 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 20 14:06:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:06:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:49 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 20 14:06:49 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 20 14:06:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:06:50 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 20 14:06:50 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 20 14:06:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:06:52 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 20 14:06:52 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 20 14:06:52 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 20 14:06:52 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 20 14:06:53 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 20 14:06:53 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 20 14:06:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:06:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:54 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 20 14:06:54 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 20 14:06:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:06:56 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.b scrub starts
Jan 20 14:06:56 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.b scrub ok
Jan 20 14:06:57 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 20 14:06:57 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 20 14:06:57 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 20 14:06:57 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 20 14:06:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:06:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:06:59 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 20 14:06:59 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 20 14:06:59 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 20 14:06:59 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 20 14:06:59 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 20 14:06:59 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 20 14:06:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:00 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 20 14:07:00 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 20 14:07:00 np0005589310 python3.9[102271]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:07:01 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 20 14:07:01 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 20 14:07:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:02 np0005589310 python3.9[102558]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 20 14:07:03 np0005589310 python3.9[102710]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 20 14:07:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:03 np0005589310 python3.9[102862]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:07:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:07:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:07:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:07:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:07:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:07:04 np0005589310 python3.9[103014]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 20 14:07:05 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 20 14:07:05 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 20 14:07:05 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 20 14:07:05 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 20 14:07:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:05 np0005589310 python3.9[103166]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:06 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 20 14:07:06 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 20 14:07:06 np0005589310 python3.9[103318]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:06 np0005589310 python3.9[103396]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:07:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:07 np0005589310 python3.9[103548]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:07:07 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 20 14:07:07 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 20 14:07:08 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 20 14:07:08 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 20 14:07:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:08 np0005589310 python3.9[103702]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 20 14:07:09 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 20 14:07:09 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 20 14:07:09 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 20 14:07:09 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 20 14:07:09 np0005589310 python3.9[103855]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 20 14:07:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:10 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 20 14:07:10 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 20 14:07:10 np0005589310 python3.9[104008]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 14:07:10 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 20 14:07:10 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 20 14:07:10 np0005589310 python3.9[104160]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 20 14:07:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:11 np0005589310 python3.9[104312]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:07:11 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 20 14:07:11 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 20 14:07:13 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 20 14:07:13 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 20 14:07:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:13 np0005589310 python3.9[104465]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:14 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 20 14:07:14 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 20 14:07:14 np0005589310 python3.9[104619]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:14 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 20 14:07:14 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 20 14:07:14 np0005589310 python3.9[104697]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:15 np0005589310 python3.9[104849]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:07:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:16 np0005589310 python3.9[104927]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:07:16 np0005589310 python3.9[105079]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:07:17 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 20 14:07:17 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 20 14:07:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:17 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 20 14:07:17 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 20 14:07:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 20 14:07:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 20 14:07:18 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 20 14:07:18 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 20 14:07:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:18 np0005589310 python3.9[105230]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:07:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:19 np0005589310 python3.9[105382]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 20 14:07:20 np0005589310 python3.9[105532]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:07:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:21 np0005589310 python3.9[105684]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:07:21 np0005589310 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 20 14:07:21 np0005589310 systemd[1]: tuned.service: Deactivated successfully.
Jan 20 14:07:21 np0005589310 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 20 14:07:21 np0005589310 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 20 14:07:22 np0005589310 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 20 14:07:22 np0005589310 python3.9[105846]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 20 14:07:23 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 20 14:07:23 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 20 14:07:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:24 np0005589310 python3.9[105998]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:07:24 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 20 14:07:24 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 20 14:07:25 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 20 14:07:25 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 20 14:07:25 np0005589310 python3.9[106152]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:07:25 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 20 14:07:25 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 20 14:07:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:25 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 20 14:07:25 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 20 14:07:26 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 20 14:07:26 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 20 14:07:26 np0005589310 systemd[1]: session-35.scope: Deactivated successfully.
Jan 20 14:07:26 np0005589310 systemd[1]: session-35.scope: Consumed 1min 6.760s CPU time.
Jan 20 14:07:26 np0005589310 systemd-logind[797]: Session 35 logged out. Waiting for processes to exit.
Jan 20 14:07:26 np0005589310 systemd-logind[797]: Removed session 35.
Jan 20 14:07:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:27 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 20 14:07:27 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 20 14:07:28 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 20 14:07:28 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 20 14:07:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:30 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 20 14:07:30 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 20 14:07:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:07:31
Jan 20 14:07:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:07:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:07:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['images', 'default.rgw.control', 'backups', '.mgr', 'volumes', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 20 14:07:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:07:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:31 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 20 14:07:31 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 20 14:07:31 np0005589310 systemd-logind[797]: New session 36 of user zuul.
Jan 20 14:07:31 np0005589310 systemd[1]: Started Session 36 of User zuul.
Jan 20 14:07:32 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 20 14:07:32 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 20 14:07:32 np0005589310 python3.9[106333]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:07:33 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 20 14:07:33 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 20 14:07:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:33 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 20 14:07:33 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 20 14:07:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:34 np0005589310 python3.9[106489]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:07:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:07:34 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 20 14:07:34 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 20 14:07:35 np0005589310 python3.9[106642]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:07:35 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 20 14:07:35 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 20 14:07:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:35 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 20 14:07:35 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 20 14:07:35 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 20 14:07:35 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 20 14:07:35 np0005589310 python3.9[106726]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 14:07:36 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 20 14:07:36 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 20 14:07:36 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 20 14:07:36 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 20 14:07:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:37 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 20 14:07:37 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 20 14:07:37 np0005589310 python3.9[106879]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:07:38 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 20 14:07:38 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 20 14:07:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:39 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 20 14:07:39 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 20 14:07:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:39 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 20 14:07:39 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 20 14:07:40 np0005589310 python3.9[107032]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:07:40 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 20 14:07:40 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 20 14:07:41 np0005589310 python3.9[107185]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:07:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:41 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 20 14:07:41 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 20 14:07:42 np0005589310 python3.9[107337]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 20 14:07:42 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 20 14:07:42 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 20 14:07:43 np0005589310 python3.9[107487]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:07:43 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 20 14:07:43 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 20 14:07:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:44 np0005589310 python3.9[107645]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:07:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:07:44 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 20 14:07:44 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 20 14:07:44 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 20 14:07:44 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 20 14:07:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:45 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 20 14:07:45 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 20 14:07:46 np0005589310 python3.9[107798]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:07:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:07:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:07:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:07:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:07:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:07:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:07:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:07:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:07:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:07:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:07:47 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:07:47 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:07:47 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 20 14:07:47 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 20 14:07:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:47 np0005589310 podman[108229]: 2026-01-20 19:07:47.689533543 +0000 UTC m=+0.051331431 container create 97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banach, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:07:47 np0005589310 systemd[76564]: Created slice User Background Tasks Slice.
Jan 20 14:07:47 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:07:47 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:07:47 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:07:47 np0005589310 systemd[76564]: Starting Cleanup of User's Temporary Files and Directories...
Jan 20 14:07:47 np0005589310 systemd[76564]: Finished Cleanup of User's Temporary Files and Directories.
Jan 20 14:07:47 np0005589310 python3.9[108216]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 20 14:07:47 np0005589310 systemd[1]: Started libpod-conmon-97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d.scope.
Jan 20 14:07:47 np0005589310 podman[108229]: 2026-01-20 19:07:47.666279268 +0000 UTC m=+0.028076986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:07:47 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:07:47 np0005589310 podman[108229]: 2026-01-20 19:07:47.805399097 +0000 UTC m=+0.167196775 container init 97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:07:47 np0005589310 podman[108229]: 2026-01-20 19:07:47.813391972 +0000 UTC m=+0.175189650 container start 97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 14:07:47 np0005589310 podman[108229]: 2026-01-20 19:07:47.816734472 +0000 UTC m=+0.178532160 container attach 97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banach, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:07:47 np0005589310 reverent_banach[108246]: 167 167
Jan 20 14:07:47 np0005589310 systemd[1]: libpod-97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d.scope: Deactivated successfully.
Jan 20 14:07:47 np0005589310 podman[108229]: 2026-01-20 19:07:47.824281735 +0000 UTC m=+0.186079413 container died 97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 20 14:07:47 np0005589310 systemd[1]: var-lib-containers-storage-overlay-10499866c0f0acfe4bb63cf09fa0917ee0e1d1138ca11a78c6c99064035baa41-merged.mount: Deactivated successfully.
Jan 20 14:07:47 np0005589310 podman[108229]: 2026-01-20 19:07:47.881461282 +0000 UTC m=+0.243258950 container remove 97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banach, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:07:47 np0005589310 systemd[1]: libpod-conmon-97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d.scope: Deactivated successfully.
Jan 20 14:07:48 np0005589310 podman[108341]: 2026-01-20 19:07:48.066528976 +0000 UTC m=+0.061546565 container create 204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 20 14:07:48 np0005589310 systemd[1]: Started libpod-conmon-204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a.scope.
Jan 20 14:07:48 np0005589310 podman[108341]: 2026-01-20 19:07:48.036662234 +0000 UTC m=+0.031679843 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:07:48 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:07:48 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681a5dfb404f52e52c78b619f40fe1eaf100c6f4a9c56a3ffb8948d8509e7780/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:48 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681a5dfb404f52e52c78b619f40fe1eaf100c6f4a9c56a3ffb8948d8509e7780/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:48 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681a5dfb404f52e52c78b619f40fe1eaf100c6f4a9c56a3ffb8948d8509e7780/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:48 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681a5dfb404f52e52c78b619f40fe1eaf100c6f4a9c56a3ffb8948d8509e7780/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:48 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681a5dfb404f52e52c78b619f40fe1eaf100c6f4a9c56a3ffb8948d8509e7780/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:48 np0005589310 podman[108341]: 2026-01-20 19:07:48.228339376 +0000 UTC m=+0.223356955 container init 204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:07:48 np0005589310 podman[108341]: 2026-01-20 19:07:48.240575504 +0000 UTC m=+0.235593053 container start 204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 14:07:48 np0005589310 podman[108341]: 2026-01-20 19:07:48.244579402 +0000 UTC m=+0.239596971 container attach 204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:07:48 np0005589310 python3.9[108440]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:07:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:48 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 20 14:07:48 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 20 14:07:48 np0005589310 frosty_varahamihira[108362]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:07:48 np0005589310 frosty_varahamihira[108362]: --> All data devices are unavailable
Jan 20 14:07:48 np0005589310 systemd[1]: libpod-204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a.scope: Deactivated successfully.
Jan 20 14:07:48 np0005589310 podman[108341]: 2026-01-20 19:07:48.813052562 +0000 UTC m=+0.808070141 container died 204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 14:07:48 np0005589310 systemd[1]: var-lib-containers-storage-overlay-681a5dfb404f52e52c78b619f40fe1eaf100c6f4a9c56a3ffb8948d8509e7780-merged.mount: Deactivated successfully.
Jan 20 14:07:48 np0005589310 podman[108341]: 2026-01-20 19:07:48.863756395 +0000 UTC m=+0.858773944 container remove 204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 20 14:07:48 np0005589310 systemd[1]: libpod-conmon-204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a.scope: Deactivated successfully.
Jan 20 14:07:49 np0005589310 python3.9[108669]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:07:49 np0005589310 podman[108683]: 2026-01-20 19:07:49.282331186 +0000 UTC m=+0.040647024 container create 106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:07:49 np0005589310 systemd[1]: Started libpod-conmon-106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514.scope.
Jan 20 14:07:49 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:07:49 np0005589310 podman[108683]: 2026-01-20 19:07:49.356675394 +0000 UTC m=+0.114991242 container init 106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 14:07:49 np0005589310 podman[108683]: 2026-01-20 19:07:49.264691752 +0000 UTC m=+0.023007590 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:07:49 np0005589310 podman[108683]: 2026-01-20 19:07:49.363377904 +0000 UTC m=+0.121693702 container start 106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:07:49 np0005589310 podman[108683]: 2026-01-20 19:07:49.366440106 +0000 UTC m=+0.124755954 container attach 106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goodall, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:07:49 np0005589310 admiring_goodall[108700]: 167 167
Jan 20 14:07:49 np0005589310 systemd[1]: libpod-106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514.scope: Deactivated successfully.
Jan 20 14:07:49 np0005589310 podman[108683]: 2026-01-20 19:07:49.386547588 +0000 UTC m=+0.144863386 container died 106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goodall, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 20 14:07:49 np0005589310 systemd[1]: var-lib-containers-storage-overlay-b68e802ab3ea2a5b833c2a0d6155d9ca873c14a3b9549c052ddbb9715daa91d6-merged.mount: Deactivated successfully.
Jan 20 14:07:49 np0005589310 podman[108683]: 2026-01-20 19:07:49.429550613 +0000 UTC m=+0.187866411 container remove 106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goodall, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 14:07:49 np0005589310 systemd[1]: libpod-conmon-106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514.scope: Deactivated successfully.
Jan 20 14:07:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:49 np0005589310 podman[108724]: 2026-01-20 19:07:49.660054709 +0000 UTC m=+0.077180296 container create b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:07:49 np0005589310 systemd[1]: Started libpod-conmon-b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6.scope.
Jan 20 14:07:49 np0005589310 podman[108724]: 2026-01-20 19:07:49.628606334 +0000 UTC m=+0.045731971 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:07:49 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:07:49 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501201f23acb40a1e51c7081607ec2bdf8d3991fec2e4243e9b7aa67aa1e2156/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:49 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501201f23acb40a1e51c7081607ec2bdf8d3991fec2e4243e9b7aa67aa1e2156/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:49 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501201f23acb40a1e51c7081607ec2bdf8d3991fec2e4243e9b7aa67aa1e2156/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:49 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501201f23acb40a1e51c7081607ec2bdf8d3991fec2e4243e9b7aa67aa1e2156/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:49 np0005589310 podman[108724]: 2026-01-20 19:07:49.758725331 +0000 UTC m=+0.175850958 container init b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 20 14:07:49 np0005589310 podman[108724]: 2026-01-20 19:07:49.76540948 +0000 UTC m=+0.182535027 container start b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 20 14:07:49 np0005589310 podman[108724]: 2026-01-20 19:07:49.769680895 +0000 UTC m=+0.186806532 container attach b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]: {
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:    "0": [
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:        {
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "devices": [
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "/dev/loop3"
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            ],
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "lv_name": "ceph_lv0",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "lv_size": "21470642176",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "name": "ceph_lv0",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "tags": {
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.cluster_name": "ceph",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.crush_device_class": "",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.encrypted": "0",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.objectstore": "bluestore",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.osd_id": "0",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.type": "block",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.vdo": "0",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.with_tpm": "0"
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            },
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "type": "block",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "vg_name": "ceph_vg0"
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:        }
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:    ],
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:    "1": [
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:        {
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "devices": [
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "/dev/loop4"
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            ],
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "lv_name": "ceph_lv1",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "lv_size": "21470642176",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "name": "ceph_lv1",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "tags": {
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.cluster_name": "ceph",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.crush_device_class": "",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.encrypted": "0",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.objectstore": "bluestore",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.osd_id": "1",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.type": "block",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.vdo": "0",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.with_tpm": "0"
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            },
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "type": "block",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "vg_name": "ceph_vg1"
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:        }
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:    ],
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:    "2": [
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:        {
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "devices": [
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "/dev/loop5"
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            ],
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "lv_name": "ceph_lv2",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "lv_size": "21470642176",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "name": "ceph_lv2",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "tags": {
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.cluster_name": "ceph",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.crush_device_class": "",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.encrypted": "0",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.objectstore": "bluestore",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.osd_id": "2",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.type": "block",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.vdo": "0",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:                "ceph.with_tpm": "0"
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            },
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "type": "block",
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:            "vg_name": "ceph_vg2"
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:        }
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]:    ]
Jan 20 14:07:50 np0005589310 pedantic_chandrasekhar[108740]: }
Jan 20 14:07:50 np0005589310 systemd[1]: libpod-b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6.scope: Deactivated successfully.
Jan 20 14:07:50 np0005589310 podman[108724]: 2026-01-20 19:07:50.060963905 +0000 UTC m=+0.478089472 container died b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 20 14:07:50 np0005589310 systemd[1]: var-lib-containers-storage-overlay-501201f23acb40a1e51c7081607ec2bdf8d3991fec2e4243e9b7aa67aa1e2156-merged.mount: Deactivated successfully.
Jan 20 14:07:50 np0005589310 podman[108724]: 2026-01-20 19:07:50.107253409 +0000 UTC m=+0.524378946 container remove b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:07:50 np0005589310 systemd[1]: libpod-conmon-b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6.scope: Deactivated successfully.
Jan 20 14:07:50 np0005589310 podman[108824]: 2026-01-20 19:07:50.558591411 +0000 UTC m=+0.046972474 container create b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 20 14:07:50 np0005589310 systemd[1]: Started libpod-conmon-b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347.scope.
Jan 20 14:07:50 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:07:50 np0005589310 podman[108824]: 2026-01-20 19:07:50.539386344 +0000 UTC m=+0.027767417 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:07:50 np0005589310 podman[108824]: 2026-01-20 19:07:50.63633332 +0000 UTC m=+0.124714403 container init b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 14:07:50 np0005589310 podman[108824]: 2026-01-20 19:07:50.644455098 +0000 UTC m=+0.132836151 container start b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:07:50 np0005589310 podman[108824]: 2026-01-20 19:07:50.647739276 +0000 UTC m=+0.136120329 container attach b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bhaskara, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:07:50 np0005589310 gallant_bhaskara[108864]: 167 167
Jan 20 14:07:50 np0005589310 systemd[1]: libpod-b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347.scope: Deactivated successfully.
Jan 20 14:07:50 np0005589310 conmon[108864]: conmon b939e00b8435b790d46b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347.scope/container/memory.events
Jan 20 14:07:50 np0005589310 podman[108824]: 2026-01-20 19:07:50.650043609 +0000 UTC m=+0.138424672 container died b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bhaskara, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:07:50 np0005589310 systemd[1]: var-lib-containers-storage-overlay-b32ea5887c719c4f03b956ce62789ecc5651834a0055ec22c80fe830ce4983f3-merged.mount: Deactivated successfully.
Jan 20 14:07:50 np0005589310 podman[108824]: 2026-01-20 19:07:50.694580116 +0000 UTC m=+0.182961169 container remove b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 20 14:07:50 np0005589310 systemd[1]: libpod-conmon-b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347.scope: Deactivated successfully.
Jan 20 14:07:50 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 20 14:07:50 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 20 14:07:50 np0005589310 podman[108943]: 2026-01-20 19:07:50.862868749 +0000 UTC m=+0.040523640 container create 9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:07:50 np0005589310 systemd[1]: Started libpod-conmon-9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e.scope.
Jan 20 14:07:50 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:07:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67587eec9af06500bef87b0a431593dcc058f6f8a8d598db4316ad231725f2c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67587eec9af06500bef87b0a431593dcc058f6f8a8d598db4316ad231725f2c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67587eec9af06500bef87b0a431593dcc058f6f8a8d598db4316ad231725f2c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67587eec9af06500bef87b0a431593dcc058f6f8a8d598db4316ad231725f2c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:07:50 np0005589310 podman[108943]: 2026-01-20 19:07:50.940058744 +0000 UTC m=+0.117713655 container init 9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:07:50 np0005589310 podman[108943]: 2026-01-20 19:07:50.845086011 +0000 UTC m=+0.022740922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:07:50 np0005589310 podman[108943]: 2026-01-20 19:07:50.948026718 +0000 UTC m=+0.125681609 container start 9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:07:50 np0005589310 podman[108943]: 2026-01-20 19:07:50.951864212 +0000 UTC m=+0.129519103 container attach 9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 20 14:07:51 np0005589310 python3.9[109036]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:07:51 np0005589310 lvm[109109]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:07:51 np0005589310 lvm[109112]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:07:51 np0005589310 lvm[109112]: VG ceph_vg1 finished
Jan 20 14:07:51 np0005589310 lvm[109109]: VG ceph_vg0 finished
Jan 20 14:07:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:51 np0005589310 lvm[109114]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:07:51 np0005589310 lvm[109114]: VG ceph_vg2 finished
Jan 20 14:07:51 np0005589310 focused_bell[108998]: {}
Jan 20 14:07:51 np0005589310 systemd[1]: libpod-9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e.scope: Deactivated successfully.
Jan 20 14:07:51 np0005589310 podman[108943]: 2026-01-20 19:07:51.758574505 +0000 UTC m=+0.936229396 container died 9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:07:51 np0005589310 systemd[1]: libpod-9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e.scope: Consumed 1.297s CPU time.
Jan 20 14:07:51 np0005589310 systemd[1]: var-lib-containers-storage-overlay-67587eec9af06500bef87b0a431593dcc058f6f8a8d598db4316ad231725f2c3-merged.mount: Deactivated successfully.
Jan 20 14:07:51 np0005589310 podman[108943]: 2026-01-20 19:07:51.799923806 +0000 UTC m=+0.977578697 container remove 9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bell, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 14:07:51 np0005589310 systemd[1]: libpod-conmon-9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e.scope: Deactivated successfully.
Jan 20 14:07:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:07:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:07:51 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:07:51 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:07:51 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 20 14:07:51 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 20 14:07:52 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 20 14:07:52 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 20 14:07:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:07:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:07:53 np0005589310 python3.9[109306]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:07:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:53 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 20 14:07:53 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 20 14:07:54 np0005589310 python3.9[109460]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 20 14:07:54 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 20 14:07:54 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 20 14:07:54 np0005589310 systemd-logind[797]: Session 36 logged out. Waiting for processes to exit.
Jan 20 14:07:54 np0005589310 systemd[1]: session-36.scope: Deactivated successfully.
Jan 20 14:07:54 np0005589310 systemd[1]: session-36.scope: Consumed 17.940s CPU time.
Jan 20 14:07:54 np0005589310 systemd-logind[797]: Removed session 36.
Jan 20 14:07:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:55 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 20 14:07:55 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 20 14:07:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:07:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:07:59 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 20 14:07:59 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 20 14:08:00 np0005589310 systemd-logind[797]: New session 37 of user zuul.
Jan 20 14:08:00 np0005589310 systemd[1]: Started Session 37 of User zuul.
Jan 20 14:08:01 np0005589310 python3.9[109638]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:08:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:01 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 20 14:08:01 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 20 14:08:02 np0005589310 python3.9[109792]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:08:02 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 20 14:08:02 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 20 14:08:02 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 20 14:08:02 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 20 14:08:03 np0005589310 python3.9[109985]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:08:03 np0005589310 systemd[1]: session-37.scope: Deactivated successfully.
Jan 20 14:08:03 np0005589310 systemd[1]: session-37.scope: Consumed 2.333s CPU time.
Jan 20 14:08:03 np0005589310 systemd-logind[797]: Session 37 logged out. Waiting for processes to exit.
Jan 20 14:08:03 np0005589310 systemd-logind[797]: Removed session 37.
Jan 20 14:08:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:08:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:08:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:08:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:08:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:08:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:08:04 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 20 14:08:04 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 20 14:08:05 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 20 14:08:05 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 20 14:08:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:08 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 20 14:08:08 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 20 14:08:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:08 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 20 14:08:08 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 20 14:08:09 np0005589310 systemd-logind[797]: New session 38 of user zuul.
Jan 20 14:08:09 np0005589310 systemd[1]: Started Session 38 of User zuul.
Jan 20 14:08:09 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 20 14:08:09 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 20 14:08:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:10 np0005589310 python3.9[110165]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:08:10 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 20 14:08:10 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 20 14:08:10 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 20 14:08:10 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 20 14:08:11 np0005589310 python3.9[110319]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:08:11 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 20 14:08:11 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 20 14:08:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:12 np0005589310 python3.9[110475]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:08:12 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 20 14:08:12 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 20 14:08:12 np0005589310 python3.9[110559]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:08:13 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 20 14:08:13 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 20 14:08:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:13 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 20 14:08:13 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 20 14:08:14 np0005589310 python3.9[110712]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:08:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:15 np0005589310 python3.9[110907]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:16 np0005589310 python3.9[111059]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:08:17 np0005589310 python3.9[111224]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:17 np0005589310 python3.9[111302]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:18 np0005589310 python3.9[111454]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:18 np0005589310 python3.9[111532]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:20 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 20 14:08:20 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 20 14:08:20 np0005589310 python3.9[111684]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:20 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 20 14:08:20 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 20 14:08:21 np0005589310 python3.9[111836]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:21 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 20 14:08:21 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 20 14:08:21 np0005589310 python3.9[111988]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:21 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 20 14:08:21 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 20 14:08:22 np0005589310 python3.9[112140]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:08:22 np0005589310 python3.9[112292]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:08:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:23 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 20 14:08:23 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 20 14:08:24 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 20 14:08:24 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 20 14:08:24 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 20 14:08:24 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 20 14:08:25 np0005589310 python3.9[112445]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:08:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:25 np0005589310 python3.9[112599]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:08:26 np0005589310 python3.9[112751]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:08:26 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 20 14:08:26 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 20 14:08:27 np0005589310 python3.9[112903]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:08:27 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 20 14:08:27 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 20 14:08:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:27 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 20 14:08:27 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 20 14:08:27 np0005589310 python3.9[113056]: ansible-service_facts Invoked
Jan 20 14:08:27 np0005589310 network[113073]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:08:27 np0005589310 network[113074]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:08:27 np0005589310 network[113075]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:08:28 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 20 14:08:28 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 20 14:08:28 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 20 14:08:28 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 20 14:08:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:28 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 20 14:08:28 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 20 14:08:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:29 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 20 14:08:29 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 20 14:08:30 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 20 14:08:30 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 20 14:08:31 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 20 14:08:31 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 20 14:08:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:08:31
Jan 20 14:08:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:08:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:08:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['volumes', 'backups', 'default.rgw.log', 'default.rgw.meta', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control']
Jan 20 14:08:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:08:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:32 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 20 14:08:32 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 20 14:08:32 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 20 14:08:32 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 20 14:08:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:33 np0005589310 python3.9[113527]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:08:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:33 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 20 14:08:33 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:08:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:08:34 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 20 14:08:34 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 20 14:08:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:36 np0005589310 python3.9[113680]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 20 14:08:36 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 20 14:08:36 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 20 14:08:36 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 20 14:08:36 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 20 14:08:37 np0005589310 python3.9[113832]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:37 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 20 14:08:37 np0005589310 python3.9[113910]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:37 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 20 14:08:38 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 20 14:08:38 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 20 14:08:38 np0005589310 python3.9[114062]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:39 np0005589310 python3.9[114140]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:40 np0005589310 python3.9[114292]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:41 np0005589310 python3.9[114444]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:08:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:42 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 20 14:08:42 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 20 14:08:42 np0005589310 python3.9[114528]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:08:43 np0005589310 systemd-logind[797]: Session 38 logged out. Waiting for processes to exit.
Jan 20 14:08:43 np0005589310 systemd[1]: session-38.scope: Deactivated successfully.
Jan 20 14:08:43 np0005589310 systemd[1]: session-38.scope: Consumed 23.631s CPU time.
Jan 20 14:08:43 np0005589310 systemd-logind[797]: Removed session 38.
Jan 20 14:08:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:44 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Jan 20 14:08:44 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:08:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:08:44 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 20 14:08:44 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 20 14:08:45 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 20 14:08:45 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 20 14:08:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.718306) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936125718448, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7202, "num_deletes": 252, "total_data_size": 9917882, "memory_usage": 10108864, "flush_reason": "Manual Compaction"}
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936125771332, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7811563, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7345, "table_properties": {"data_size": 7784868, "index_size": 17492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 74852, "raw_average_key_size": 23, "raw_value_size": 7722475, "raw_average_value_size": 2388, "num_data_blocks": 768, "num_entries": 3233, "num_filter_entries": 3233, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935727, "oldest_key_time": 1768935727, "file_creation_time": 1768936125, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 53138 microseconds, and 15029 cpu microseconds.
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.771453) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7811563 bytes OK
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.771479) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.772898) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.772916) EVENT_LOG_v1 {"time_micros": 1768936125772911, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.772952) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9886596, prev total WAL file size 9886596, number of live WAL files 2.
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.776202) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7628KB) 13(58KB) 8(1944B)]
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936125776417, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7873467, "oldest_snapshot_seqno": -1}
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3058 keys, 7826258 bytes, temperature: kUnknown
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936125846712, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7826258, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7799999, "index_size": 17509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7685, "raw_key_size": 73255, "raw_average_key_size": 23, "raw_value_size": 7738995, "raw_average_value_size": 2530, "num_data_blocks": 770, "num_entries": 3058, "num_filter_entries": 3058, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 1768936125, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.847102) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7826258 bytes
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.848648) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.8 rd, 111.1 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3348, records dropped: 290 output_compression: NoCompression
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.848669) EVENT_LOG_v1 {"time_micros": 1768936125848657, "job": 4, "event": "compaction_finished", "compaction_time_micros": 70456, "compaction_time_cpu_micros": 34083, "output_level": 6, "num_output_files": 1, "total_output_size": 7826258, "num_input_records": 3348, "num_output_records": 3058, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936125850362, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936125850468, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936125850528, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 20 14:08:45 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.775502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:08:46 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 20 14:08:46 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 20 14:08:47 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 20 14:08:47 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 20 14:08:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:48 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 20 14:08:48 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 20 14:08:48 np0005589310 systemd-logind[797]: New session 39 of user zuul.
Jan 20 14:08:48 np0005589310 systemd[1]: Started Session 39 of User zuul.
Jan 20 14:08:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:49 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 20 14:08:49 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 20 14:08:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:49 np0005589310 python3.9[114711]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:50 np0005589310 python3.9[114863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:50 np0005589310 python3.9[114941]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:51 np0005589310 systemd[1]: session-39.scope: Deactivated successfully.
Jan 20 14:08:51 np0005589310 systemd[1]: session-39.scope: Consumed 1.703s CPU time.
Jan 20 14:08:51 np0005589310 systemd-logind[797]: Session 39 logged out. Waiting for processes to exit.
Jan 20 14:08:51 np0005589310 systemd-logind[797]: Removed session 39.
Jan 20 14:08:51 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 20 14:08:51 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 20 14:08:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:08:52 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:08:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:08:52 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:08:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:08:52 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:08:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:08:52 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:08:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:08:52 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:08:52 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:08:52 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:08:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:08:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:08:52 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:08:52 np0005589310 podman[115110]: 2026-01-20 19:08:52.908075282 +0000 UTC m=+0.035480471 container create 314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_yonath, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 14:08:52 np0005589310 systemd[1]: Started libpod-conmon-314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5.scope.
Jan 20 14:08:52 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:08:52 np0005589310 podman[115110]: 2026-01-20 19:08:52.985445348 +0000 UTC m=+0.112850547 container init 314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:08:52 np0005589310 podman[115110]: 2026-01-20 19:08:52.892342741 +0000 UTC m=+0.019747950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:08:52 np0005589310 podman[115110]: 2026-01-20 19:08:52.991028356 +0000 UTC m=+0.118433545 container start 314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:08:52 np0005589310 podman[115110]: 2026-01-20 19:08:52.994203025 +0000 UTC m=+0.121608234 container attach 314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_yonath, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:08:52 np0005589310 compassionate_yonath[115124]: 167 167
Jan 20 14:08:52 np0005589310 systemd[1]: libpod-314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5.scope: Deactivated successfully.
Jan 20 14:08:52 np0005589310 conmon[115124]: conmon 314f597e5b010d0deca1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5.scope/container/memory.events
Jan 20 14:08:52 np0005589310 podman[115110]: 2026-01-20 19:08:52.996897002 +0000 UTC m=+0.124302191 container died 314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:08:53 np0005589310 systemd[1]: var-lib-containers-storage-overlay-99d491242c2f4198afaa7d7f496904d02e2d7a1a3a450eb6e65f37df4b54b321-merged.mount: Deactivated successfully.
Jan 20 14:08:53 np0005589310 podman[115110]: 2026-01-20 19:08:53.042651865 +0000 UTC m=+0.170057064 container remove 314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:08:53 np0005589310 systemd[1]: libpod-conmon-314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5.scope: Deactivated successfully.
Jan 20 14:08:53 np0005589310 podman[115150]: 2026-01-20 19:08:53.224850379 +0000 UTC m=+0.051081677 container create 98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 14:08:53 np0005589310 systemd[1]: Started libpod-conmon-98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8.scope.
Jan 20 14:08:53 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:08:53 np0005589310 podman[115150]: 2026-01-20 19:08:53.197951983 +0000 UTC m=+0.024183371 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:08:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7bd1158e3086c032eb96cd2b51cb5e325025c49ab0db125a631ad71e224bb5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:08:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7bd1158e3086c032eb96cd2b51cb5e325025c49ab0db125a631ad71e224bb5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:08:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7bd1158e3086c032eb96cd2b51cb5e325025c49ab0db125a631ad71e224bb5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:08:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7bd1158e3086c032eb96cd2b51cb5e325025c49ab0db125a631ad71e224bb5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:08:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7bd1158e3086c032eb96cd2b51cb5e325025c49ab0db125a631ad71e224bb5b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:08:53 np0005589310 podman[115150]: 2026-01-20 19:08:53.308497141 +0000 UTC m=+0.134728469 container init 98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hoover, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:08:53 np0005589310 podman[115150]: 2026-01-20 19:08:53.314627603 +0000 UTC m=+0.140858901 container start 98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hoover, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 20 14:08:53 np0005589310 podman[115150]: 2026-01-20 19:08:53.318446157 +0000 UTC m=+0.144677475 container attach 98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hoover, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:08:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:53 np0005589310 romantic_hoover[115168]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:08:53 np0005589310 romantic_hoover[115168]: --> All data devices are unavailable
Jan 20 14:08:53 np0005589310 systemd[1]: libpod-98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8.scope: Deactivated successfully.
Jan 20 14:08:53 np0005589310 podman[115150]: 2026-01-20 19:08:53.756513589 +0000 UTC m=+0.582744907 container died 98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hoover, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:08:53 np0005589310 systemd[1]: var-lib-containers-storage-overlay-b7bd1158e3086c032eb96cd2b51cb5e325025c49ab0db125a631ad71e224bb5b-merged.mount: Deactivated successfully.
Jan 20 14:08:53 np0005589310 podman[115150]: 2026-01-20 19:08:53.797038842 +0000 UTC m=+0.623270140 container remove 98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hoover, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 20 14:08:53 np0005589310 systemd[1]: libpod-conmon-98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8.scope: Deactivated successfully.
Jan 20 14:08:54 np0005589310 podman[115265]: 2026-01-20 19:08:54.213276104 +0000 UTC m=+0.036396862 container create d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 20 14:08:54 np0005589310 systemd[1]: Started libpod-conmon-d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353.scope.
Jan 20 14:08:54 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:08:54 np0005589310 podman[115265]: 2026-01-20 19:08:54.282617111 +0000 UTC m=+0.105737889 container init d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Jan 20 14:08:54 np0005589310 podman[115265]: 2026-01-20 19:08:54.288179689 +0000 UTC m=+0.111300447 container start d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:08:54 np0005589310 podman[115265]: 2026-01-20 19:08:54.291802259 +0000 UTC m=+0.114923047 container attach d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:08:54 np0005589310 podman[115265]: 2026-01-20 19:08:54.197917384 +0000 UTC m=+0.021038162 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:08:54 np0005589310 systemd[1]: libpod-d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353.scope: Deactivated successfully.
Jan 20 14:08:54 np0005589310 elastic_ride[115281]: 167 167
Jan 20 14:08:54 np0005589310 conmon[115281]: conmon d93735f77f9d0d9e6870 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353.scope/container/memory.events
Jan 20 14:08:54 np0005589310 podman[115265]: 2026-01-20 19:08:54.29547951 +0000 UTC m=+0.118600278 container died d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ride, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:08:54 np0005589310 systemd[1]: var-lib-containers-storage-overlay-469a4a26e354903567fca2b579961077b7c44a60f8046641068720d8481781ab-merged.mount: Deactivated successfully.
Jan 20 14:08:54 np0005589310 podman[115265]: 2026-01-20 19:08:54.337881451 +0000 UTC m=+0.161002209 container remove d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ride, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:08:54 np0005589310 systemd[1]: libpod-conmon-d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353.scope: Deactivated successfully.
Jan 20 14:08:54 np0005589310 podman[115303]: 2026-01-20 19:08:54.521969521 +0000 UTC m=+0.045835637 container create 0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bhabha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 14:08:54 np0005589310 systemd[1]: Started libpod-conmon-0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073.scope.
Jan 20 14:08:54 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:08:54 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a0f42dba5bed70e8e069fefc130ed2dc0f7d7fa49b2bdc16fc69c60cd5dd38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:08:54 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a0f42dba5bed70e8e069fefc130ed2dc0f7d7fa49b2bdc16fc69c60cd5dd38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:08:54 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a0f42dba5bed70e8e069fefc130ed2dc0f7d7fa49b2bdc16fc69c60cd5dd38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:08:54 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a0f42dba5bed70e8e069fefc130ed2dc0f7d7fa49b2bdc16fc69c60cd5dd38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:08:54 np0005589310 podman[115303]: 2026-01-20 19:08:54.499017273 +0000 UTC m=+0.022883409 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:08:54 np0005589310 podman[115303]: 2026-01-20 19:08:54.603732827 +0000 UTC m=+0.127598983 container init 0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bhabha, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 14:08:54 np0005589310 podman[115303]: 2026-01-20 19:08:54.61155698 +0000 UTC m=+0.135423096 container start 0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:08:54 np0005589310 podman[115303]: 2026-01-20 19:08:54.615928068 +0000 UTC m=+0.139794224 container attach 0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:08:54 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 20 14:08:54 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]: {
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:    "0": [
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:        {
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "devices": [
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "/dev/loop3"
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            ],
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "lv_name": "ceph_lv0",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "lv_size": "21470642176",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "name": "ceph_lv0",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "tags": {
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.cluster_name": "ceph",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.crush_device_class": "",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.encrypted": "0",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.objectstore": "bluestore",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.osd_id": "0",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.type": "block",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.vdo": "0",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.with_tpm": "0"
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            },
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "type": "block",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "vg_name": "ceph_vg0"
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:        }
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:    ],
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:    "1": [
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:        {
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "devices": [
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "/dev/loop4"
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            ],
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "lv_name": "ceph_lv1",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "lv_size": "21470642176",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "name": "ceph_lv1",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "tags": {
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.cluster_name": "ceph",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.crush_device_class": "",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.encrypted": "0",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.objectstore": "bluestore",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.osd_id": "1",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.type": "block",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.vdo": "0",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.with_tpm": "0"
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            },
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "type": "block",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "vg_name": "ceph_vg1"
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:        }
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:    ],
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:    "2": [
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:        {
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "devices": [
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "/dev/loop5"
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            ],
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "lv_name": "ceph_lv2",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "lv_size": "21470642176",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "name": "ceph_lv2",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "tags": {
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.cluster_name": "ceph",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.crush_device_class": "",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.encrypted": "0",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.objectstore": "bluestore",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.osd_id": "2",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.type": "block",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.vdo": "0",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:                "ceph.with_tpm": "0"
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            },
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "type": "block",
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:            "vg_name": "ceph_vg2"
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:        }
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]:    ]
Jan 20 14:08:54 np0005589310 fervent_bhabha[115320]: }
Jan 20 14:08:54 np0005589310 systemd[1]: libpod-0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073.scope: Deactivated successfully.
Jan 20 14:08:54 np0005589310 podman[115303]: 2026-01-20 19:08:54.919243972 +0000 UTC m=+0.443110078 container died 0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bhabha, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:08:54 np0005589310 systemd[1]: var-lib-containers-storage-overlay-14a0f42dba5bed70e8e069fefc130ed2dc0f7d7fa49b2bdc16fc69c60cd5dd38-merged.mount: Deactivated successfully.
Jan 20 14:08:54 np0005589310 podman[115303]: 2026-01-20 19:08:54.961084919 +0000 UTC m=+0.484951025 container remove 0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 14:08:54 np0005589310 systemd[1]: libpod-conmon-0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073.scope: Deactivated successfully.
Jan 20 14:08:55 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 20 14:08:55 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 20 14:08:55 np0005589310 podman[115403]: 2026-01-20 19:08:55.410734308 +0000 UTC m=+0.044553235 container create d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lederberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:08:55 np0005589310 systemd[1]: Started libpod-conmon-d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614.scope.
Jan 20 14:08:55 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:08:55 np0005589310 podman[115403]: 2026-01-20 19:08:55.394193448 +0000 UTC m=+0.028012385 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:08:55 np0005589310 podman[115403]: 2026-01-20 19:08:55.490956275 +0000 UTC m=+0.124775212 container init d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:08:55 np0005589310 podman[115403]: 2026-01-20 19:08:55.498115902 +0000 UTC m=+0.131934829 container start d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:08:55 np0005589310 podman[115403]: 2026-01-20 19:08:55.502122141 +0000 UTC m=+0.135941058 container attach d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lederberg, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 14:08:55 np0005589310 trusting_lederberg[115419]: 167 167
Jan 20 14:08:55 np0005589310 systemd[1]: libpod-d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614.scope: Deactivated successfully.
Jan 20 14:08:55 np0005589310 podman[115403]: 2026-01-20 19:08:55.50568101 +0000 UTC m=+0.139499927 container died d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lederberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 14:08:55 np0005589310 systemd[1]: var-lib-containers-storage-overlay-c634c716127eff9cf60533083beab32003e71d56c1fe9525e5c4e5ff09126b0c-merged.mount: Deactivated successfully.
Jan 20 14:08:55 np0005589310 podman[115403]: 2026-01-20 19:08:55.546675885 +0000 UTC m=+0.180494792 container remove d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lederberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:08:55 np0005589310 systemd[1]: libpod-conmon-d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614.scope: Deactivated successfully.
Jan 20 14:08:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:55 np0005589310 podman[115444]: 2026-01-20 19:08:55.699202384 +0000 UTC m=+0.044801371 container create 3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kirch, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 20 14:08:55 np0005589310 systemd[1]: Started libpod-conmon-3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412.scope.
Jan 20 14:08:55 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:08:55 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981acd86554aa7b7893992c63650d676ab974d6c38d47592aa4a1b7fcc35219d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:08:55 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981acd86554aa7b7893992c63650d676ab974d6c38d47592aa4a1b7fcc35219d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:08:55 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981acd86554aa7b7893992c63650d676ab974d6c38d47592aa4a1b7fcc35219d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:08:55 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981acd86554aa7b7893992c63650d676ab974d6c38d47592aa4a1b7fcc35219d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:08:55 np0005589310 podman[115444]: 2026-01-20 19:08:55.777815961 +0000 UTC m=+0.123414968 container init 3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:08:55 np0005589310 podman[115444]: 2026-01-20 19:08:55.68293574 +0000 UTC m=+0.028534747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:08:55 np0005589310 podman[115444]: 2026-01-20 19:08:55.785920942 +0000 UTC m=+0.131519929 container start 3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kirch, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:08:55 np0005589310 podman[115444]: 2026-01-20 19:08:55.790110266 +0000 UTC m=+0.135709283 container attach 3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kirch, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:08:55 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 20 14:08:55 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 20 14:08:56 np0005589310 lvm[115539]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:08:56 np0005589310 lvm[115542]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:08:56 np0005589310 lvm[115542]: VG ceph_vg1 finished
Jan 20 14:08:56 np0005589310 lvm[115539]: VG ceph_vg0 finished
Jan 20 14:08:56 np0005589310 lvm[115544]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:08:56 np0005589310 lvm[115544]: VG ceph_vg2 finished
Jan 20 14:08:56 np0005589310 lvm[115546]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:08:56 np0005589310 lvm[115546]: VG ceph_vg1 finished
Jan 20 14:08:56 np0005589310 systemd-logind[797]: New session 40 of user zuul.
Jan 20 14:08:56 np0005589310 systemd[1]: Started Session 40 of User zuul.
Jan 20 14:08:56 np0005589310 youthful_kirch[115460]: {}
Jan 20 14:08:56 np0005589310 systemd[1]: libpod-3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412.scope: Deactivated successfully.
Jan 20 14:08:56 np0005589310 systemd[1]: libpod-3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412.scope: Consumed 1.412s CPU time.
Jan 20 14:08:56 np0005589310 podman[115444]: 2026-01-20 19:08:56.646243734 +0000 UTC m=+0.991842741 container died 3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:08:56 np0005589310 systemd[1]: var-lib-containers-storage-overlay-981acd86554aa7b7893992c63650d676ab974d6c38d47592aa4a1b7fcc35219d-merged.mount: Deactivated successfully.
Jan 20 14:08:56 np0005589310 podman[115444]: 2026-01-20 19:08:56.700726523 +0000 UTC m=+1.046325500 container remove 3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kirch, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:08:56 np0005589310 systemd[1]: libpod-conmon-3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412.scope: Deactivated successfully.
Jan 20 14:08:56 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:08:56 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:08:56 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:08:56 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:08:56 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:08:56 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:08:57 np0005589310 python3.9[115738]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:08:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:58 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 20 14:08:58 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 20 14:08:58 np0005589310 python3.9[115894]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:08:58 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 20 14:08:58 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 20 14:08:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:08:59 np0005589310 python3.9[116069]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:08:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:08:59 np0005589310 python3.9[116147]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.5ru5ie4z recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:00 np0005589310 python3.9[116299]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:00 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 20 14:09:00 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 20 14:09:00 np0005589310 python3.9[116377]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.5ghv7o77 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:01 np0005589310 python3.9[116529]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:09:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:01 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 20 14:09:01 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 20 14:09:01 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 20 14:09:01 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 20 14:09:02 np0005589310 python3.9[116681]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:02 np0005589310 python3.9[116759]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:09:03 np0005589310 python3.9[116911]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:03 np0005589310 python3.9[116989]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:09:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:04 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 20 14:09:04 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 20 14:09:04 np0005589310 python3.9[117141]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:09:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:09:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:09:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:09:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:09:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:09:04 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 20 14:09:04 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 20 14:09:04 np0005589310 python3.9[117293]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:05 np0005589310 python3.9[117371]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:05 np0005589310 python3.9[117523]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:06 np0005589310 python3.9[117601]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:06 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 20 14:09:06 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 20 14:09:07 np0005589310 python3.9[117753]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:07 np0005589310 systemd[1]: Reloading.
Jan 20 14:09:07 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:09:07 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:09:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:08 np0005589310 python3.9[117942]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:08 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 20 14:09:08 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 20 14:09:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:08 np0005589310 python3.9[118020]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:08 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 20 14:09:08 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 20 14:09:09 np0005589310 python3.9[118172]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:09 np0005589310 python3.9[118250]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:10 np0005589310 python3.9[118402]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:09:10 np0005589310 systemd[1]: Reloading.
Jan 20 14:09:10 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:09:10 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:09:10 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 20 14:09:10 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 20 14:09:10 np0005589310 systemd[1]: Starting Create netns directory...
Jan 20 14:09:10 np0005589310 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 14:09:10 np0005589310 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 14:09:10 np0005589310 systemd[1]: Finished Create netns directory.
Jan 20 14:09:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:11 np0005589310 python3.9[118593]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:09:11 np0005589310 network[118610]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:09:11 np0005589310 network[118611]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:09:11 np0005589310 network[118612]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:09:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:13 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 20 14:09:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:13 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 20 14:09:14 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 20 14:09:14 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 20 14:09:14 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 20 14:09:14 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 20 14:09:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:16 np0005589310 python3.9[118874]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:16 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 20 14:09:16 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 20 14:09:16 np0005589310 python3.9[118952]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:17 np0005589310 python3.9[119104]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:18 np0005589310 python3.9[119256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:18 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 20 14:09:18 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 20 14:09:18 np0005589310 python3.9[119334]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:19 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 20 14:09:19 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 20 14:09:19 np0005589310 python3.9[119486]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 20 14:09:19 np0005589310 systemd[1]: Starting Time & Date Service...
Jan 20 14:09:19 np0005589310 systemd[1]: Started Time & Date Service.
Jan 20 14:09:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:20 np0005589310 python3.9[119642]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:20 np0005589310 python3.9[119794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:21 np0005589310 python3.9[119872]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:21 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 20 14:09:21 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 20 14:09:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:22 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 20 14:09:22 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 20 14:09:22 np0005589310 python3.9[120024]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:22 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 20 14:09:22 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 20 14:09:22 np0005589310 python3.9[120102]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4gqboswt recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:23 np0005589310 python3.9[120254]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:23 np0005589310 python3.9[120332]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:24 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 20 14:09:24 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 20 14:09:24 np0005589310 python3.9[120484]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:09:25 np0005589310 python3[120637]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 14:09:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:26 np0005589310 python3.9[120789]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:26 np0005589310 python3.9[120867]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:27 np0005589310 python3.9[121019]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:27 np0005589310 python3.9[121144]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936166.659998-308-272978667455818/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:28 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 20 14:09:28 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 20 14:09:28 np0005589310 python3.9[121296]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:28 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 20 14:09:28 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 20 14:09:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:28 np0005589310 python3.9[121374]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:29 np0005589310 python3.9[121527]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:29 np0005589310 python3.9[121605]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:30 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 20 14:09:30 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 20 14:09:30 np0005589310 python3.9[121757]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:30 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 20 14:09:30 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 20 14:09:31 np0005589310 python3.9[121835]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:09:31
Jan 20 14:09:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:09:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:09:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'backups', 'default.rgw.control', 'volumes']
Jan 20 14:09:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:09:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:31 np0005589310 python3.9[121987]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:09:32 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 20 14:09:32 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 20 14:09:32 np0005589310 python3.9[122142]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:33 np0005589310 python3.9[122294]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:34 np0005589310 python3.9[122446]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:34 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 20 14:09:34 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:09:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:09:34 np0005589310 python3.9[122600]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 14:09:35 np0005589310 python3.9[122752]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 14:09:35 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 20 14:09:35 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 20 14:09:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:35 np0005589310 systemd[1]: session-40.scope: Deactivated successfully.
Jan 20 14:09:35 np0005589310 systemd[1]: session-40.scope: Consumed 27.993s CPU time.
Jan 20 14:09:35 np0005589310 systemd-logind[797]: Session 40 logged out. Waiting for processes to exit.
Jan 20 14:09:35 np0005589310 systemd-logind[797]: Removed session 40.
Jan 20 14:09:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:39 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 20 14:09:39 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 20 14:09:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:41 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 20 14:09:41 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 20 14:09:41 np0005589310 systemd-logind[797]: New session 41 of user zuul.
Jan 20 14:09:41 np0005589310 systemd[1]: Started Session 41 of User zuul.
Jan 20 14:09:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:42 np0005589310 python3.9[122933]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 20 14:09:42 np0005589310 python3.9[123085]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:09:43 np0005589310 python3.9[123239]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 20 14:09:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:44 np0005589310 python3.9[123391]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.01etno5f follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:09:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:09:44 np0005589310 python3.9[123516]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.01etno5f mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936183.642505-44-41799903480127/.source.01etno5f _original_basename=.7ojkxo3k follow=False checksum=e99902dd0defb60b71293d8fd634ed68435b6950 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:45 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 20 14:09:45 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 20 14:09:45 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 20 14:09:45 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 20 14:09:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:45 np0005589310 python3.9[123668]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:09:46 np0005589310 python3.9[123820]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCz3b07HV3uJtYZS5SXFV7UOV5We+VhL7E4MInSTY31YDxLu74UtLEKRyupRLnE9d5cVG8e5JHiBt72dhLY2VbhACUUzWUR1aTUO/jAfEzM97GQgzgl5skY63LeYydonq3csjRREkj9YaliQuWdLTocUhfB/0t0HX525BkLTzTfdhjhDOY6NzeJUhZjMKy9uM/RZvITLdPgnYTjcLN12hAtWjUGKvAcUEfWpRW0efbUgaPSuNuRxZWXNuusp0UBopS1fv5P4Ea0VhwUmNZ0IJC3eljfUuHXRdQr6A4px/e8yVSwUILaYNL6ettCVX8HNvIxk6xmT5clWgr+Vibu+qnmAoOdOqoRYdZgH/26kU5ZMOYv8wpa/TUoXbD1ClrmNUQNjD4kSFXQtI1uhLxuNYTzf4ftLLy92oo3ENBg4Oph0Hw00CUPNDcsAgD65KYg8/Frjms4h8AUjYrV2ktrqAPVEvcItbD5e7/cAcF1AnB9aHpNzgUo1iUbMmXN2/I/fQ0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM5Jhg8QlHJt93+bopoKxGN+UwIsXQojyFhcp0nCuLCA#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCNoSkRzTUMXF81nHL5zY2fe7DfBkbvi2MFoFs1WurMuV9pkgr/kpqf2yHrz5D04ncV4FFj7hs+/ZPi7NjXPcIw=#012 create=True mode=0644 path=/tmp/ansible.01etno5f state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:47 np0005589310 python3.9[123972]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.01etno5f' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:09:47 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 20 14:09:47 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 20 14:09:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:48 np0005589310 python3.9[124126]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.01etno5f state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:48 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 20 14:09:48 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 20 14:09:48 np0005589310 systemd-logind[797]: Session 41 logged out. Waiting for processes to exit.
Jan 20 14:09:48 np0005589310 systemd[1]: session-41.scope: Deactivated successfully.
Jan 20 14:09:48 np0005589310 systemd[1]: session-41.scope: Consumed 4.832s CPU time.
Jan 20 14:09:48 np0005589310 systemd-logind[797]: Removed session 41.
Jan 20 14:09:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:49 np0005589310 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 14:09:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:53 np0005589310 systemd-logind[797]: New session 42 of user zuul.
Jan 20 14:09:53 np0005589310 systemd[1]: Started Session 42 of User zuul.
Jan 20 14:09:54 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 20 14:09:54 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 20 14:09:55 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 20 14:09:55 np0005589310 python3.9[124306]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:09:55 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 20 14:09:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:56 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 20 14:09:56 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 20 14:09:56 np0005589310 python3.9[124462]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 20 14:09:57 np0005589310 python3.9[124616]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:09:57 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 20 14:09:57 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 20 14:09:57 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:09:57 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:09:57 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:09:57 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:09:57 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:09:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:57 np0005589310 python3.9[124851]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:09:58 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:09:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:09:58 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:09:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:09:58 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:09:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:09:58 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:09:58 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 20 14:09:58 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:09:58 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 20 14:09:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:09:58 np0005589310 python3.9[125054]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:09:58 np0005589310 podman[125067]: 2026-01-20 19:09:58.80208183 +0000 UTC m=+0.050001549 container create 1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 20 14:09:58 np0005589310 systemd[1]: Started libpod-conmon-1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4.scope.
Jan 20 14:09:58 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:09:58 np0005589310 podman[125067]: 2026-01-20 19:09:58.777610864 +0000 UTC m=+0.025530613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:09:58 np0005589310 podman[125067]: 2026-01-20 19:09:58.888043117 +0000 UTC m=+0.135962856 container init 1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kalam, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 14:09:58 np0005589310 podman[125067]: 2026-01-20 19:09:58.898697681 +0000 UTC m=+0.146617400 container start 1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:09:58 np0005589310 podman[125067]: 2026-01-20 19:09:58.902731971 +0000 UTC m=+0.150651710 container attach 1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 14:09:58 np0005589310 vigilant_kalam[125083]: 167 167
Jan 20 14:09:58 np0005589310 systemd[1]: libpod-1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4.scope: Deactivated successfully.
Jan 20 14:09:58 np0005589310 podman[125067]: 2026-01-20 19:09:58.906847523 +0000 UTC m=+0.154767252 container died 1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:09:58 np0005589310 systemd[1]: var-lib-containers-storage-overlay-7f28e1f526127f0c632dd6a1ea6629485740b65235f306b0377de782ddaf7a89-merged.mount: Deactivated successfully.
Jan 20 14:09:58 np0005589310 podman[125067]: 2026-01-20 19:09:58.947182581 +0000 UTC m=+0.195102300 container remove 1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kalam, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:09:58 np0005589310 systemd[1]: libpod-conmon-1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4.scope: Deactivated successfully.
Jan 20 14:09:59 np0005589310 podman[125172]: 2026-01-20 19:09:59.097427739 +0000 UTC m=+0.043261541 container create c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:09:59 np0005589310 systemd[1]: Started libpod-conmon-c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def.scope.
Jan 20 14:09:59 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:09:59 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8f4142d9ea1c43e0ee30833786f5c8ed7514e826fde8c81c8cf810275e5535/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:59 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8f4142d9ea1c43e0ee30833786f5c8ed7514e826fde8c81c8cf810275e5535/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:59 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8f4142d9ea1c43e0ee30833786f5c8ed7514e826fde8c81c8cf810275e5535/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:59 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8f4142d9ea1c43e0ee30833786f5c8ed7514e826fde8c81c8cf810275e5535/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:59 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8f4142d9ea1c43e0ee30833786f5c8ed7514e826fde8c81c8cf810275e5535/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:09:59 np0005589310 podman[125172]: 2026-01-20 19:09:59.165779201 +0000 UTC m=+0.111613023 container init c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:09:59 np0005589310 podman[125172]: 2026-01-20 19:09:59.077182958 +0000 UTC m=+0.023016810 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:09:59 np0005589310 podman[125172]: 2026-01-20 19:09:59.17824505 +0000 UTC m=+0.124078852 container start c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 14:09:59 np0005589310 podman[125172]: 2026-01-20 19:09:59.181984052 +0000 UTC m=+0.127817874 container attach c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 14:09:59 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:09:59 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:09:59 np0005589310 happy_brown[125200]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:09:59 np0005589310 happy_brown[125200]: --> All data devices are unavailable
Jan 20 14:09:59 np0005589310 systemd[1]: libpod-c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def.scope: Deactivated successfully.
Jan 20 14:09:59 np0005589310 podman[125172]: 2026-01-20 19:09:59.630296766 +0000 UTC m=+0.576130568 container died c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:09:59 np0005589310 python3.9[125285]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:09:59 np0005589310 systemd[1]: var-lib-containers-storage-overlay-be8f4142d9ea1c43e0ee30833786f5c8ed7514e826fde8c81c8cf810275e5535-merged.mount: Deactivated successfully.
Jan 20 14:09:59 np0005589310 podman[125172]: 2026-01-20 19:09:59.682170071 +0000 UTC m=+0.628003893 container remove c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 20 14:09:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:09:59 np0005589310 systemd[1]: libpod-conmon-c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def.scope: Deactivated successfully.
Jan 20 14:09:59 np0005589310 systemd[1]: session-42.scope: Deactivated successfully.
Jan 20 14:09:59 np0005589310 systemd[1]: session-42.scope: Consumed 4.067s CPU time.
Jan 20 14:09:59 np0005589310 systemd-logind[797]: Session 42 logged out. Waiting for processes to exit.
Jan 20 14:10:00 np0005589310 systemd-logind[797]: Removed session 42.
Jan 20 14:10:00 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 20 14:10:00 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 20 14:10:00 np0005589310 podman[125395]: 2026-01-20 19:10:00.146799689 +0000 UTC m=+0.042692847 container create fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:10:00 np0005589310 systemd[1]: Started libpod-conmon-fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81.scope.
Jan 20 14:10:00 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:10:00 np0005589310 podman[125395]: 2026-01-20 19:10:00.130109276 +0000 UTC m=+0.026002464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:10:00 np0005589310 podman[125395]: 2026-01-20 19:10:00.231098796 +0000 UTC m=+0.126991974 container init fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:10:00 np0005589310 podman[125395]: 2026-01-20 19:10:00.239624567 +0000 UTC m=+0.135517725 container start fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 14:10:00 np0005589310 podman[125395]: 2026-01-20 19:10:00.243353828 +0000 UTC m=+0.139247006 container attach fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:10:00 np0005589310 adoring_aryabhata[125411]: 167 167
Jan 20 14:10:00 np0005589310 systemd[1]: libpod-fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81.scope: Deactivated successfully.
Jan 20 14:10:00 np0005589310 conmon[125411]: conmon fdb67046f0b76a2fe36d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81.scope/container/memory.events
Jan 20 14:10:00 np0005589310 podman[125395]: 2026-01-20 19:10:00.247079151 +0000 UTC m=+0.142972309 container died fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 14:10:00 np0005589310 systemd[1]: var-lib-containers-storage-overlay-0503282bc40603d8e765c9786ee821937a5e99bf01cf432e1538a7e0f9df7247-merged.mount: Deactivated successfully.
Jan 20 14:10:00 np0005589310 podman[125395]: 2026-01-20 19:10:00.287696726 +0000 UTC m=+0.183589884 container remove fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_aryabhata, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 20 14:10:00 np0005589310 systemd[1]: libpod-conmon-fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81.scope: Deactivated successfully.
Jan 20 14:10:00 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 20 14:10:00 np0005589310 podman[125435]: 2026-01-20 19:10:00.460263956 +0000 UTC m=+0.052298834 container create 67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lederberg, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:10:00 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 20 14:10:00 np0005589310 systemd[1]: Started libpod-conmon-67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6.scope.
Jan 20 14:10:00 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:10:00 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa804a0693ae7689f9abb9a2909dea1016681f6fc5011c9229ecb6f48929f699/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:00 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa804a0693ae7689f9abb9a2909dea1016681f6fc5011c9229ecb6f48929f699/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:00 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa804a0693ae7689f9abb9a2909dea1016681f6fc5011c9229ecb6f48929f699/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:00 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa804a0693ae7689f9abb9a2909dea1016681f6fc5011c9229ecb6f48929f699/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:00 np0005589310 podman[125435]: 2026-01-20 19:10:00.435694529 +0000 UTC m=+0.027729437 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:10:00 np0005589310 podman[125435]: 2026-01-20 19:10:00.532990226 +0000 UTC m=+0.125025174 container init 67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lederberg, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:10:00 np0005589310 podman[125435]: 2026-01-20 19:10:00.538444582 +0000 UTC m=+0.130479480 container start 67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lederberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:10:00 np0005589310 podman[125435]: 2026-01-20 19:10:00.542582384 +0000 UTC m=+0.134617282 container attach 67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]: {
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:    "0": [
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:        {
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "devices": [
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "/dev/loop3"
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            ],
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "lv_name": "ceph_lv0",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "lv_size": "21470642176",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "name": "ceph_lv0",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "tags": {
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.cluster_name": "ceph",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.crush_device_class": "",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.encrypted": "0",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.objectstore": "bluestore",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.osd_id": "0",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.type": "block",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.vdo": "0",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.with_tpm": "0"
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            },
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "type": "block",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "vg_name": "ceph_vg0"
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:        }
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:    ],
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:    "1": [
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:        {
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "devices": [
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "/dev/loop4"
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            ],
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "lv_name": "ceph_lv1",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "lv_size": "21470642176",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "name": "ceph_lv1",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "tags": {
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.cluster_name": "ceph",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.crush_device_class": "",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.encrypted": "0",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.objectstore": "bluestore",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.osd_id": "1",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.type": "block",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.vdo": "0",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.with_tpm": "0"
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            },
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "type": "block",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "vg_name": "ceph_vg1"
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:        }
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:    ],
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:    "2": [
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:        {
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "devices": [
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "/dev/loop5"
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            ],
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "lv_name": "ceph_lv2",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "lv_size": "21470642176",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "name": "ceph_lv2",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "tags": {
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.cluster_name": "ceph",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.crush_device_class": "",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.encrypted": "0",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.objectstore": "bluestore",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.osd_id": "2",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.type": "block",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.vdo": "0",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:                "ceph.with_tpm": "0"
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            },
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "type": "block",
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:            "vg_name": "ceph_vg2"
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:        }
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]:    ]
Jan 20 14:10:00 np0005589310 angry_lederberg[125452]: }
Jan 20 14:10:00 np0005589310 systemd[1]: libpod-67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6.scope: Deactivated successfully.
Jan 20 14:10:00 np0005589310 podman[125435]: 2026-01-20 19:10:00.846537546 +0000 UTC m=+0.438572434 container died 67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lederberg, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:10:00 np0005589310 systemd[1]: var-lib-containers-storage-overlay-aa804a0693ae7689f9abb9a2909dea1016681f6fc5011c9229ecb6f48929f699-merged.mount: Deactivated successfully.
Jan 20 14:10:00 np0005589310 podman[125435]: 2026-01-20 19:10:00.905168327 +0000 UTC m=+0.497203255 container remove 67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:10:00 np0005589310 systemd[1]: libpod-conmon-67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6.scope: Deactivated successfully.
Jan 20 14:10:01 np0005589310 podman[125535]: 2026-01-20 19:10:01.414235666 +0000 UTC m=+0.059169806 container create 6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Jan 20 14:10:01 np0005589310 systemd[1]: Started libpod-conmon-6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d.scope.
Jan 20 14:10:01 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:10:01 np0005589310 podman[125535]: 2026-01-20 19:10:01.392838956 +0000 UTC m=+0.037773156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:10:01 np0005589310 podman[125535]: 2026-01-20 19:10:01.497847745 +0000 UTC m=+0.142781915 container init 6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_meninsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:10:01 np0005589310 podman[125535]: 2026-01-20 19:10:01.504857768 +0000 UTC m=+0.149791918 container start 6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:10:01 np0005589310 podman[125535]: 2026-01-20 19:10:01.508244102 +0000 UTC m=+0.153178272 container attach 6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_meninsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:10:01 np0005589310 gallant_meninsky[125552]: 167 167
Jan 20 14:10:01 np0005589310 systemd[1]: libpod-6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d.scope: Deactivated successfully.
Jan 20 14:10:01 np0005589310 podman[125535]: 2026-01-20 19:10:01.510999581 +0000 UTC m=+0.155933741 container died 6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:10:01 np0005589310 systemd[1]: var-lib-containers-storage-overlay-c4e12ffd98d344e19f5d459d015af7a524526635d240447d3f95bde3bc7b46ae-merged.mount: Deactivated successfully.
Jan 20 14:10:01 np0005589310 podman[125535]: 2026-01-20 19:10:01.551313359 +0000 UTC m=+0.196247509 container remove 6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_meninsky, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:10:01 np0005589310 systemd[1]: libpod-conmon-6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d.scope: Deactivated successfully.
Jan 20 14:10:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:01 np0005589310 podman[125577]: 2026-01-20 19:10:01.726043122 +0000 UTC m=+0.043263142 container create 867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_allen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 20 14:10:01 np0005589310 systemd[1]: Started libpod-conmon-867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839.scope.
Jan 20 14:10:01 np0005589310 podman[125577]: 2026-01-20 19:10:01.70533993 +0000 UTC m=+0.022559970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:10:01 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:10:01 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff11b03b9e643ddc06a49f7cd1ab4f7d901195d6a092ae391bf3bd71e42d280e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:01 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff11b03b9e643ddc06a49f7cd1ab4f7d901195d6a092ae391bf3bd71e42d280e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:01 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff11b03b9e643ddc06a49f7cd1ab4f7d901195d6a092ae391bf3bd71e42d280e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:01 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff11b03b9e643ddc06a49f7cd1ab4f7d901195d6a092ae391bf3bd71e42d280e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:10:01 np0005589310 podman[125577]: 2026-01-20 19:10:01.823033773 +0000 UTC m=+0.140253803 container init 867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 14:10:01 np0005589310 podman[125577]: 2026-01-20 19:10:01.828727504 +0000 UTC m=+0.145947514 container start 867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:10:01 np0005589310 podman[125577]: 2026-01-20 19:10:01.832351383 +0000 UTC m=+0.149571413 container attach 867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:10:01 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 20 14:10:02 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 20 14:10:02 np0005589310 lvm[125673]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:10:02 np0005589310 lvm[125670]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:10:02 np0005589310 lvm[125673]: VG ceph_vg1 finished
Jan 20 14:10:02 np0005589310 lvm[125670]: VG ceph_vg0 finished
Jan 20 14:10:02 np0005589310 lvm[125675]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:10:02 np0005589310 lvm[125675]: VG ceph_vg2 finished
Jan 20 14:10:02 np0005589310 naughty_allen[125594]: {}
Jan 20 14:10:02 np0005589310 systemd[1]: libpod-867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839.scope: Deactivated successfully.
Jan 20 14:10:02 np0005589310 systemd[1]: libpod-867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839.scope: Consumed 1.450s CPU time.
Jan 20 14:10:02 np0005589310 podman[125577]: 2026-01-20 19:10:02.731897455 +0000 UTC m=+1.049117465 container died 867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:10:02 np0005589310 systemd[1]: var-lib-containers-storage-overlay-ff11b03b9e643ddc06a49f7cd1ab4f7d901195d6a092ae391bf3bd71e42d280e-merged.mount: Deactivated successfully.
Jan 20 14:10:02 np0005589310 podman[125577]: 2026-01-20 19:10:02.781554924 +0000 UTC m=+1.098774924 container remove 867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_allen, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:10:02 np0005589310 systemd[1]: libpod-conmon-867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839.scope: Deactivated successfully.
Jan 20 14:10:02 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:10:02 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:10:02 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:10:02 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:10:02 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 20 14:10:03 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 20 14:10:03 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 20 14:10:03 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 20 14:10:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:03 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:10:03 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:10:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:10:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:10:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:10:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:10:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:10:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:10:04 np0005589310 systemd-logind[797]: New session 43 of user zuul.
Jan 20 14:10:04 np0005589310 systemd[1]: Started Session 43 of User zuul.
Jan 20 14:10:05 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 20 14:10:05 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 20 14:10:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:05 np0005589310 python3.9[125867]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:10:06 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 20 14:10:06 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 20 14:10:06 np0005589310 python3.9[126023]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:10:07 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 20 14:10:07 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 20 14:10:07 np0005589310 python3.9[126107]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 14:10:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:08 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 20 14:10:08 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 20 14:10:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:09 np0005589310 python3.9[126258]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:10:10 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 20 14:10:10 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 20 14:10:10 np0005589310 python3.9[126409]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:10:11 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 20 14:10:11 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 20 14:10:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:11 np0005589310 python3.9[126559]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:10:12 np0005589310 python3.9[126709]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:10:12 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 20 14:10:12 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 20 14:10:12 np0005589310 systemd[1]: session-43.scope: Deactivated successfully.
Jan 20 14:10:12 np0005589310 systemd[1]: session-43.scope: Consumed 5.843s CPU time.
Jan 20 14:10:12 np0005589310 systemd-logind[797]: Session 43 logged out. Waiting for processes to exit.
Jan 20 14:10:12 np0005589310 systemd-logind[797]: Removed session 43.
Jan 20 14:10:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:16 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 20 14:10:16 np0005589310 systemd[1]: session-18.scope: Deactivated successfully.
Jan 20 14:10:16 np0005589310 systemd[1]: session-18.scope: Consumed 1min 55.829s CPU time.
Jan 20 14:10:16 np0005589310 systemd-logind[797]: Session 18 logged out. Waiting for processes to exit.
Jan 20 14:10:16 np0005589310 systemd-logind[797]: Removed session 18.
Jan 20 14:10:16 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 20 14:10:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:18 np0005589310 systemd-logind[797]: New session 44 of user zuul.
Jan 20 14:10:18 np0005589310 systemd[1]: Started Session 44 of User zuul.
Jan 20 14:10:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:19 np0005589310 python3.9[126887]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:10:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:20 np0005589310 python3.9[127043]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:10:21 np0005589310 python3.9[127195]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:10:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:22 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 20 14:10:22 np0005589310 python3.9[127347]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:22 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 20 14:10:23 np0005589310 python3.9[127470]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936221.8017986-60-33565049330087/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d4257a70fdd0e32e402a88c76489fb75b7e683f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:23 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 20 14:10:23 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 20 14:10:23 np0005589310 python3.9[127622]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:24 np0005589310 python3.9[127745]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936223.1394405-60-215435658161296/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=152233ee71f040918347d87ff03f6885e159af40 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:24 np0005589310 python3.9[127897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:25 np0005589310 python3.9[128020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936224.1759803-60-239020377512394/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=9453dc17e0dd9df101138e7ca8744fe471f47316 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:25 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 20 14:10:25 np0005589310 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 20 14:10:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:25 np0005589310 python3.9[128172]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:10:26 np0005589310 python3.9[128324]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:10:26 np0005589310 python3.9[128476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:27 np0005589310 python3.9[128599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936226.4822671-119-21557831655293/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=72e76094c7443781bf758a7464981f2b70fe5291 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:27 np0005589310 python3.9[128751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:28 np0005589310 python3.9[128874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936227.5784624-119-79455915135213/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a18bf0ee72aa50109151ff784db14fca75746767 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:29 np0005589310 python3.9[129026]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:29 np0005589310 python3.9[129149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936228.631237-119-83399760921554/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d703e43b59f2c47bf9794e81afbf179a565c6333 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:30 np0005589310 python3.9[129301]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:10:30 np0005589310 python3.9[129453]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:10:31 np0005589310 python3.9[129605]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:10:31
Jan 20 14:10:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:10:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:10:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'images', 'backups']
Jan 20 14:10:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:10:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:32 np0005589310 python3.9[129728]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936231.0116086-178-149965609728351/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=85f679b0dc57f98e831d1c0dde8acc81b42034a0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:32 np0005589310 python3.9[129880]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:33 np0005589310 python3.9[130003]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936232.1903186-178-191228168464007/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a18bf0ee72aa50109151ff784db14fca75746767 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:33 np0005589310 python3.9[130155]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:34 np0005589310 python3.9[130278]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936233.258287-178-34417673641001/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=9490ef0441c17c9b1176677fb60ad630695d18c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:10:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:10:35 np0005589310 python3.9[130430]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:10:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:35 np0005589310 python3.9[130582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:36 np0005589310 python3.9[130705]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936235.482825-246-45545398084939/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a3ba5373cbe9b77d5caa7583160220709f3d2e75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:37 np0005589310 python3.9[130857]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:10:37 np0005589310 python3.9[131009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:38 np0005589310 python3.9[131132]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936237.1839015-270-252727844320334/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a3ba5373cbe9b77d5caa7583160220709f3d2e75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:38 np0005589310 python3.9[131284]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:10:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:39 np0005589310 python3.9[131436]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:39 np0005589310 python3.9[131559]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936238.878624-294-243176634967829/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a3ba5373cbe9b77d5caa7583160220709f3d2e75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:40 np0005589310 python3.9[131711]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:10:41 np0005589310 python3.9[131863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:41 np0005589310 python3.9[131986]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936240.7462938-318-258365391249608/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a3ba5373cbe9b77d5caa7583160220709f3d2e75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:42 np0005589310 python3.9[132138]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:10:42 np0005589310 python3.9[132290]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:43 np0005589310 python3.9[132413]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936242.4761004-342-42346712225165/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a3ba5373cbe9b77d5caa7583160220709f3d2e75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:44 np0005589310 python3.9[132565]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:10:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:10:44 np0005589310 python3.9[132717]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:45 np0005589310 python3.9[132840]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936244.1598256-366-93601247695196/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a3ba5373cbe9b77d5caa7583160220709f3d2e75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:45 np0005589310 systemd[1]: session-44.scope: Deactivated successfully.
Jan 20 14:10:45 np0005589310 systemd[1]: session-44.scope: Consumed 21.174s CPU time.
Jan 20 14:10:45 np0005589310 systemd-logind[797]: Session 44 logged out. Waiting for processes to exit.
Jan 20 14:10:45 np0005589310 systemd-logind[797]: Removed session 44.
Jan 20 14:10:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:50 np0005589310 systemd-logind[797]: New session 45 of user zuul.
Jan 20 14:10:50 np0005589310 systemd[1]: Started Session 45 of User zuul.
Jan 20 14:10:51 np0005589310 python3.9[133020]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:52 np0005589310 python3.9[133172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:52 np0005589310 python3.9[133295]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936251.4793594-29-186958417720475/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=82f4fc7876a2f5ec58c3b05a59c81182fa299df3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:53 np0005589310 python3.9[133447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:10:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:53 np0005589310 python3.9[133570]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936252.794772-29-213823089534989/.source.conf _original_basename=ceph.conf follow=False checksum=07857ecc6916485d0d36f394eaef27670eedaf2c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:10:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:54 np0005589310 systemd[1]: session-45.scope: Deactivated successfully.
Jan 20 14:10:54 np0005589310 systemd[1]: session-45.scope: Consumed 2.450s CPU time.
Jan 20 14:10:54 np0005589310 systemd-logind[797]: Session 45 logged out. Waiting for processes to exit.
Jan 20 14:10:54 np0005589310 systemd-logind[797]: Removed session 45.
Jan 20 14:10:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:10:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:10:59 np0005589310 systemd-logind[797]: New session 46 of user zuul.
Jan 20 14:10:59 np0005589310 systemd[1]: Started Session 46 of User zuul.
Jan 20 14:10:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:00 np0005589310 python3.9[133748]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:11:01 np0005589310 python3.9[133904]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:11:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:02 np0005589310 python3.9[134056]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:11:02 np0005589310 python3.9[134206]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:11:03 np0005589310 python3.9[134426]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 20 14:11:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:03 np0005589310 podman[134503]: 2026-01-20 19:11:03.916917014 +0000 UTC m=+0.049143639 container create c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mendeleev, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:11:03 np0005589310 systemd[1]: Started libpod-conmon-c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda.scope.
Jan 20 14:11:03 np0005589310 podman[134503]: 2026-01-20 19:11:03.891301201 +0000 UTC m=+0.023527856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:11:03 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:11:04 np0005589310 podman[134503]: 2026-01-20 19:11:04.011875453 +0000 UTC m=+0.144102098 container init c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mendeleev, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 14:11:04 np0005589310 podman[134503]: 2026-01-20 19:11:04.018909893 +0000 UTC m=+0.151136518 container start c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:11:04 np0005589310 podman[134503]: 2026-01-20 19:11:04.023025817 +0000 UTC m=+0.155252432 container attach c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:11:04 np0005589310 systemd[1]: libpod-c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda.scope: Deactivated successfully.
Jan 20 14:11:04 np0005589310 infallible_mendeleev[134519]: 167 167
Jan 20 14:11:04 np0005589310 conmon[134519]: conmon c2811b91f76d4bb72029 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda.scope/container/memory.events
Jan 20 14:11:04 np0005589310 podman[134503]: 2026-01-20 19:11:04.026693139 +0000 UTC m=+0.158919754 container died c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mendeleev, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:11:04 np0005589310 systemd[1]: var-lib-containers-storage-overlay-ad3021c7950182cd65172122db53e878a6471cb3767f0b4736290b10d77abf4a-merged.mount: Deactivated successfully.
Jan 20 14:11:04 np0005589310 podman[134503]: 2026-01-20 19:11:04.072344158 +0000 UTC m=+0.204570783 container remove c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mendeleev, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:11:04 np0005589310 systemd[1]: libpod-conmon-c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda.scope: Deactivated successfully.
Jan 20 14:11:04 np0005589310 podman[134543]: 2026-01-20 19:11:04.236766187 +0000 UTC m=+0.051754938 container create af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:11:04 np0005589310 systemd[1]: Started libpod-conmon-af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6.scope.
Jan 20 14:11:04 np0005589310 podman[134543]: 2026-01-20 19:11:04.219279139 +0000 UTC m=+0.034267900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:11:04 np0005589310 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 20 14:11:04 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:11:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68db9520a524384f024aede44b1092b243c42e80dcaad06f632bad66d932bd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68db9520a524384f024aede44b1092b243c42e80dcaad06f632bad66d932bd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68db9520a524384f024aede44b1092b243c42e80dcaad06f632bad66d932bd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68db9520a524384f024aede44b1092b243c42e80dcaad06f632bad66d932bd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68db9520a524384f024aede44b1092b243c42e80dcaad06f632bad66d932bd9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:04 np0005589310 podman[134543]: 2026-01-20 19:11:04.352166601 +0000 UTC m=+0.167155392 container init af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 14:11:04 np0005589310 podman[134543]: 2026-01-20 19:11:04.361173186 +0000 UTC m=+0.176161927 container start af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:11:04 np0005589310 podman[134543]: 2026-01-20 19:11:04.364713587 +0000 UTC m=+0.179702378 container attach af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:11:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:11:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:11:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:11:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:11:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:11:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:11:04 np0005589310 elastic_sanderson[134559]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:11:04 np0005589310 elastic_sanderson[134559]: --> All data devices are unavailable
Jan 20 14:11:04 np0005589310 systemd[1]: libpod-af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6.scope: Deactivated successfully.
Jan 20 14:11:04 np0005589310 podman[134543]: 2026-01-20 19:11:04.834117861 +0000 UTC m=+0.649106602 container died af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 14:11:04 np0005589310 systemd[1]: var-lib-containers-storage-overlay-d68db9520a524384f024aede44b1092b243c42e80dcaad06f632bad66d932bd9-merged.mount: Deactivated successfully.
Jan 20 14:11:04 np0005589310 podman[134543]: 2026-01-20 19:11:04.913820393 +0000 UTC m=+0.728809134 container remove af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:11:04 np0005589310 systemd[1]: libpod-conmon-af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6.scope: Deactivated successfully.
Jan 20 14:11:05 np0005589310 podman[134807]: 2026-01-20 19:11:05.340601888 +0000 UTC m=+0.041451364 container create fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 14:11:05 np0005589310 systemd[1]: Started libpod-conmon-fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae.scope.
Jan 20 14:11:05 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:11:05 np0005589310 podman[134807]: 2026-01-20 19:11:05.321598996 +0000 UTC m=+0.022448492 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:11:05 np0005589310 podman[134807]: 2026-01-20 19:11:05.417679321 +0000 UTC m=+0.118528817 container init fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:11:05 np0005589310 podman[134807]: 2026-01-20 19:11:05.425001217 +0000 UTC m=+0.125850693 container start fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:11:05 np0005589310 reverent_hopper[134823]: 167 167
Jan 20 14:11:05 np0005589310 systemd[1]: libpod-fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae.scope: Deactivated successfully.
Jan 20 14:11:05 np0005589310 podman[134807]: 2026-01-20 19:11:05.430033102 +0000 UTC m=+0.130882598 container attach fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:11:05 np0005589310 podman[134807]: 2026-01-20 19:11:05.430460041 +0000 UTC m=+0.131309527 container died fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:11:05 np0005589310 systemd[1]: var-lib-containers-storage-overlay-8fb48acfa2dbf83eb9f761a16caabdc8ba5a246523ace10a8eb6bc40a40f42a0-merged.mount: Deactivated successfully.
Jan 20 14:11:05 np0005589310 podman[134807]: 2026-01-20 19:11:05.462119851 +0000 UTC m=+0.162969327 container remove fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:11:05 np0005589310 systemd[1]: libpod-conmon-fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae.scope: Deactivated successfully.
Jan 20 14:11:05 np0005589310 python3.9[134796]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:11:05 np0005589310 podman[134854]: 2026-01-20 19:11:05.647718702 +0000 UTC m=+0.069840310 container create 36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:11:05 np0005589310 systemd[1]: Started libpod-conmon-36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488.scope.
Jan 20 14:11:05 np0005589310 podman[134854]: 2026-01-20 19:11:05.620432211 +0000 UTC m=+0.042553899 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:11:05 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:11:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:05 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0428e95d26e2a972289c979f86b5ecd2862eaf4aa80e1aef8336020440c50207/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:05 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0428e95d26e2a972289c979f86b5ecd2862eaf4aa80e1aef8336020440c50207/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:05 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0428e95d26e2a972289c979f86b5ecd2862eaf4aa80e1aef8336020440c50207/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:05 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0428e95d26e2a972289c979f86b5ecd2862eaf4aa80e1aef8336020440c50207/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:05 np0005589310 podman[134854]: 2026-01-20 19:11:05.741215548 +0000 UTC m=+0.163337146 container init 36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:11:05 np0005589310 podman[134854]: 2026-01-20 19:11:05.748478623 +0000 UTC m=+0.170600211 container start 36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 14:11:05 np0005589310 podman[134854]: 2026-01-20 19:11:05.753174819 +0000 UTC m=+0.175296417 container attach 36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 14:11:06 np0005589310 goofy_moore[134872]: {
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:    "0": [
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:        {
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "devices": [
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "/dev/loop3"
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            ],
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "lv_name": "ceph_lv0",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "lv_size": "21470642176",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "name": "ceph_lv0",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "tags": {
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.cluster_name": "ceph",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.crush_device_class": "",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.encrypted": "0",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.objectstore": "bluestore",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.osd_id": "0",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.type": "block",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.vdo": "0",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.with_tpm": "0"
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            },
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "type": "block",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "vg_name": "ceph_vg0"
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:        }
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:    ],
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:    "1": [
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:        {
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "devices": [
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "/dev/loop4"
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            ],
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "lv_name": "ceph_lv1",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "lv_size": "21470642176",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "name": "ceph_lv1",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "tags": {
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.cluster_name": "ceph",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.crush_device_class": "",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.encrypted": "0",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.objectstore": "bluestore",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.osd_id": "1",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.type": "block",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.vdo": "0",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.with_tpm": "0"
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            },
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "type": "block",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "vg_name": "ceph_vg1"
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:        }
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:    ],
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:    "2": [
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:        {
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "devices": [
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "/dev/loop5"
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            ],
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "lv_name": "ceph_lv2",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "lv_size": "21470642176",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "name": "ceph_lv2",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "tags": {
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.cluster_name": "ceph",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.crush_device_class": "",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.encrypted": "0",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.objectstore": "bluestore",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.osd_id": "2",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.type": "block",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.vdo": "0",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:                "ceph.with_tpm": "0"
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            },
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "type": "block",
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:            "vg_name": "ceph_vg2"
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:        }
Jan 20 14:11:06 np0005589310 goofy_moore[134872]:    ]
Jan 20 14:11:06 np0005589310 goofy_moore[134872]: }
Jan 20 14:11:06 np0005589310 systemd[1]: libpod-36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488.scope: Deactivated successfully.
Jan 20 14:11:06 np0005589310 podman[134854]: 2026-01-20 19:11:06.054594154 +0000 UTC m=+0.476715762 container died 36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:11:06 np0005589310 systemd[1]: var-lib-containers-storage-overlay-0428e95d26e2a972289c979f86b5ecd2862eaf4aa80e1aef8336020440c50207-merged.mount: Deactivated successfully.
Jan 20 14:11:06 np0005589310 podman[134854]: 2026-01-20 19:11:06.097812877 +0000 UTC m=+0.519934465 container remove 36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 14:11:06 np0005589310 systemd[1]: libpod-conmon-36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488.scope: Deactivated successfully.
Jan 20 14:11:06 np0005589310 python3.9[134985]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:11:06 np0005589310 podman[135031]: 2026-01-20 19:11:06.526865953 +0000 UTC m=+0.041069514 container create 321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bhaskara, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:11:06 np0005589310 systemd[1]: Started libpod-conmon-321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d.scope.
Jan 20 14:11:06 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:11:06 np0005589310 podman[135031]: 2026-01-20 19:11:06.597626602 +0000 UTC m=+0.111830183 container init 321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bhaskara, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:11:06 np0005589310 podman[135031]: 2026-01-20 19:11:06.507561214 +0000 UTC m=+0.021764785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:11:06 np0005589310 podman[135031]: 2026-01-20 19:11:06.606215318 +0000 UTC m=+0.120418879 container start 321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bhaskara, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 20 14:11:06 np0005589310 podman[135031]: 2026-01-20 19:11:06.609696027 +0000 UTC m=+0.123899608 container attach 321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bhaskara, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 14:11:06 np0005589310 nervous_bhaskara[135047]: 167 167
Jan 20 14:11:06 np0005589310 systemd[1]: libpod-321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d.scope: Deactivated successfully.
Jan 20 14:11:06 np0005589310 podman[135052]: 2026-01-20 19:11:06.65511763 +0000 UTC m=+0.027659100 container died 321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:11:06 np0005589310 systemd[1]: var-lib-containers-storage-overlay-1291c1fff46ff1f6f82bb846a72fbf25b581160030e7e850b5191787e42ccece-merged.mount: Deactivated successfully.
Jan 20 14:11:06 np0005589310 podman[135052]: 2026-01-20 19:11:06.694388253 +0000 UTC m=+0.066929633 container remove 321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bhaskara, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:11:06 np0005589310 systemd[1]: libpod-conmon-321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d.scope: Deactivated successfully.
Jan 20 14:11:06 np0005589310 podman[135074]: 2026-01-20 19:11:06.876584316 +0000 UTC m=+0.043748877 container create 9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_robinson, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:11:06 np0005589310 systemd[1]: Started libpod-conmon-9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2.scope.
Jan 20 14:11:06 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:11:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9c3fbc1a335221521a672ef3b5aa396242b8f1164333241ddee12ed15da074/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9c3fbc1a335221521a672ef3b5aa396242b8f1164333241ddee12ed15da074/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9c3fbc1a335221521a672ef3b5aa396242b8f1164333241ddee12ed15da074/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9c3fbc1a335221521a672ef3b5aa396242b8f1164333241ddee12ed15da074/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:06 np0005589310 podman[135074]: 2026-01-20 19:11:06.855928446 +0000 UTC m=+0.023093057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:11:06 np0005589310 podman[135074]: 2026-01-20 19:11:06.954393535 +0000 UTC m=+0.121558126 container init 9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_robinson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 14:11:06 np0005589310 podman[135074]: 2026-01-20 19:11:06.962839577 +0000 UTC m=+0.130004138 container start 9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:11:06 np0005589310 podman[135074]: 2026-01-20 19:11:06.965939908 +0000 UTC m=+0.133104479 container attach 9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_robinson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 14:11:07 np0005589310 lvm[135169]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:11:07 np0005589310 lvm[135169]: VG ceph_vg1 finished
Jan 20 14:11:07 np0005589310 lvm[135168]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:11:07 np0005589310 lvm[135168]: VG ceph_vg0 finished
Jan 20 14:11:07 np0005589310 lvm[135171]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:11:07 np0005589310 lvm[135171]: VG ceph_vg2 finished
Jan 20 14:11:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:07 np0005589310 optimistic_robinson[135090]: {}
Jan 20 14:11:07 np0005589310 systemd[1]: libpod-9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2.scope: Deactivated successfully.
Jan 20 14:11:07 np0005589310 podman[135074]: 2026-01-20 19:11:07.790523988 +0000 UTC m=+0.957688559 container died 9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 14:11:07 np0005589310 systemd[1]: libpod-9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2.scope: Consumed 1.275s CPU time.
Jan 20 14:11:07 np0005589310 systemd[1]: var-lib-containers-storage-overlay-ca9c3fbc1a335221521a672ef3b5aa396242b8f1164333241ddee12ed15da074-merged.mount: Deactivated successfully.
Jan 20 14:11:07 np0005589310 podman[135074]: 2026-01-20 19:11:07.83411486 +0000 UTC m=+1.001279421 container remove 9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:11:07 np0005589310 systemd[1]: libpod-conmon-9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2.scope: Deactivated successfully.
Jan 20 14:11:07 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:11:07 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:11:07 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:11:07 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:11:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:08 np0005589310 python3.9[135363]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:11:08 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:11:08 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:11:09 np0005589310 python3[135518]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 20 14:11:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:10 np0005589310 python3.9[135670]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:11 np0005589310 python3.9[135822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:11 np0005589310 python3.9[135900]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:12 np0005589310 python3.9[136052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:12 np0005589310 python3.9[136130]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.e1ck4hfr recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:13 np0005589310 python3.9[136282]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:13 np0005589310 python3.9[136360]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:14 np0005589310 python3.9[136512]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:11:15 np0005589310 python3[136665]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 14:11:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:15 np0005589310 python3.9[136817]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:16 np0005589310 python3.9[136942]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936275.4247189-152-10625078794513/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:17 np0005589310 python3.9[137094]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:17 np0005589310 python3.9[137219]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936276.756451-167-12054507933329/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:18 np0005589310 python3.9[137371]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:19 np0005589310 python3.9[137496]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936277.9738297-182-62635064492725/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:19 np0005589310 python3.9[137648]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:20 np0005589310 python3.9[137773]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936279.2076283-197-229631044038092/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:20 np0005589310 python3.9[137925]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:21 np0005589310 python3.9[138050]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936280.303058-212-70113298385430/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:22 np0005589310 python3.9[138202]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:22 np0005589310 python3.9[138354]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:11:23 np0005589310 python3.9[138509]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.754416) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936283754495, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1750, "num_deletes": 251, "total_data_size": 2460365, "memory_usage": 2510280, "flush_reason": "Manual Compaction"}
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936283765819, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1453682, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7346, "largest_seqno": 9095, "table_properties": {"data_size": 1448050, "index_size": 2515, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 17063, "raw_average_key_size": 20, "raw_value_size": 1434400, "raw_average_value_size": 1757, "num_data_blocks": 118, "num_entries": 816, "num_filter_entries": 816, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936126, "oldest_key_time": 1768936126, "file_creation_time": 1768936283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 11461 microseconds, and 6163 cpu microseconds.
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.765887) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1453682 bytes OK
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.765910) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.767170) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.767189) EVENT_LOG_v1 {"time_micros": 1768936283767184, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.767212) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2452568, prev total WAL file size 2452568, number of live WAL files 2.
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.768150) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1419KB)], [20(7642KB)]
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936283768247, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9279940, "oldest_snapshot_seqno": -1}
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3432 keys, 7302639 bytes, temperature: kUnknown
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936283818244, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7302639, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7276400, "index_size": 16529, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 81971, "raw_average_key_size": 23, "raw_value_size": 7211171, "raw_average_value_size": 2101, "num_data_blocks": 731, "num_entries": 3432, "num_filter_entries": 3432, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 1768936283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.818500) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7302639 bytes
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.820799) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.3 rd, 145.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.5 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(11.4) write-amplify(5.0) OK, records in: 3874, records dropped: 442 output_compression: NoCompression
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.820822) EVENT_LOG_v1 {"time_micros": 1768936283820810, "job": 6, "event": "compaction_finished", "compaction_time_micros": 50072, "compaction_time_cpu_micros": 16120, "output_level": 6, "num_output_files": 1, "total_output_size": 7302639, "num_input_records": 3874, "num_output_records": 3432, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936283821208, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936283822773, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.768002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.822841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.822849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.822852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.822856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:11:23 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.822859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:11:24 np0005589310 python3.9[138661]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:11:24 np0005589310 python3.9[138814]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:11:25 np0005589310 python3.9[138968]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:11:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:26 np0005589310 python3.9[139123]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:27 np0005589310 python3.9[139273]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:11:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:28 np0005589310 python3.9[139426]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:11:28 np0005589310 ovs-vsctl[139427]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 20 14:11:28 np0005589310 python3.9[139579]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:11:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:29 np0005589310 python3.9[139734]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:11:29 np0005589310 ovs-vsctl[139735]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 20 14:11:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:29 np0005589310 python3.9[139885]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:11:30 np0005589310 python3.9[140039]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:11:31 np0005589310 python3.9[140191]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:11:31
Jan 20 14:11:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:11:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:11:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.log', 'vms', 'default.rgw.meta', 'default.rgw.control', 'backups']
Jan 20 14:11:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:11:31 np0005589310 python3.9[140269]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:11:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:32 np0005589310 python3.9[140421]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:32 np0005589310 python3.9[140499]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:11:33 np0005589310 python3.9[140651]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:33 np0005589310 python3.9[140803]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:34 np0005589310 python3.9[140881]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:11:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:11:34 np0005589310 python3.9[141033]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:35 np0005589310 python3.9[141111]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:35 np0005589310 python3.9[141263]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:11:35 np0005589310 systemd[1]: Reloading.
Jan 20 14:11:36 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:11:36 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:11:37 np0005589310 python3.9[141452]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:37 np0005589310 python3.9[141530]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:38 np0005589310 python3.9[141682]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:38 np0005589310 python3.9[141760]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:39 np0005589310 python3.9[141912]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:11:39 np0005589310 systemd[1]: Reloading.
Jan 20 14:11:39 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:11:39 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:11:39 np0005589310 systemd[1]: Starting Create netns directory...
Jan 20 14:11:39 np0005589310 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 14:11:39 np0005589310 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 14:11:39 np0005589310 systemd[1]: Finished Create netns directory.
Jan 20 14:11:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:40 np0005589310 python3.9[142105]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:11:40 np0005589310 python3.9[142257]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:41 np0005589310 python3.9[142380]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936300.4849029-463-239350650588294/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:11:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:42 np0005589310 python3.9[142532]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:42 np0005589310 python3.9[142684]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:11:43 np0005589310 python3.9[142836]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:11:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:44 np0005589310 python3.9[142959]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936303.0507336-496-266932400009190/.source.json _original_basename=.2kujvss6 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:11:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:11:44 np0005589310 python3.9[143109]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:46 np0005589310 python3.9[143532]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 20 14:11:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:47 np0005589310 python3.9[143684]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 14:11:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:48 np0005589310 python3[143836]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 14:11:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:54 np0005589310 podman[143849]: 2026-01-20 19:11:54.212977332 +0000 UTC m=+5.197244125 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 20 14:11:54 np0005589310 podman[143971]: 2026-01-20 19:11:54.358798178 +0000 UTC m=+0.047243266 container create c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 14:11:54 np0005589310 podman[143971]: 2026-01-20 19:11:54.335260472 +0000 UTC m=+0.023705590 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 20 14:11:54 np0005589310 python3[143836]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 20 14:11:55 np0005589310 python3.9[144159]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:11:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:55 np0005589310 python3.9[144313]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:56 np0005589310 python3.9[144389]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:11:56 np0005589310 python3.9[144540]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768936316.3312128-574-239768728118749/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:11:57 np0005589310 python3.9[144616]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:11:57 np0005589310 systemd[1]: Reloading.
Jan 20 14:11:57 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:11:57 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:11:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:58 np0005589310 python3.9[144729]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:11:58 np0005589310 systemd[1]: Reloading.
Jan 20 14:11:58 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:11:58 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:11:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:11:58 np0005589310 systemd[1]: Starting ovn_controller container...
Jan 20 14:11:59 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:11:59 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c442f5e904669fed25c3c9d2416fe551779526f820d8f46063b8f88c0556cc0f/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 20 14:11:59 np0005589310 systemd[1]: Started /usr/bin/podman healthcheck run c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a.
Jan 20 14:11:59 np0005589310 podman[144771]: 2026-01-20 19:11:59.112124458 +0000 UTC m=+0.216290809 container init c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: + sudo -E kolla_set_configs
Jan 20 14:11:59 np0005589310 podman[144771]: 2026-01-20 19:11:59.139210243 +0000 UTC m=+0.243376574 container start c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 14:11:59 np0005589310 systemd[1]: Created slice User Slice of UID 0.
Jan 20 14:11:59 np0005589310 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 20 14:11:59 np0005589310 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 20 14:11:59 np0005589310 systemd[1]: Starting User Manager for UID 0...
Jan 20 14:11:59 np0005589310 edpm-start-podman-container[144771]: ovn_controller
Jan 20 14:11:59 np0005589310 edpm-start-podman-container[144770]: Creating additional drop-in dependency for "ovn_controller" (c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a)
Jan 20 14:11:59 np0005589310 systemd[1]: Reloading.
Jan 20 14:11:59 np0005589310 podman[144794]: 2026-01-20 19:11:59.29954768 +0000 UTC m=+0.150325339 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 20 14:11:59 np0005589310 systemd[144806]: Queued start job for default target Main User Target.
Jan 20 14:11:59 np0005589310 systemd[144806]: Created slice User Application Slice.
Jan 20 14:11:59 np0005589310 systemd[144806]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 20 14:11:59 np0005589310 systemd[144806]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 14:11:59 np0005589310 systemd[144806]: Reached target Paths.
Jan 20 14:11:59 np0005589310 systemd[144806]: Reached target Timers.
Jan 20 14:11:59 np0005589310 systemd[144806]: Starting D-Bus User Message Bus Socket...
Jan 20 14:11:59 np0005589310 systemd[144806]: Starting Create User's Volatile Files and Directories...
Jan 20 14:11:59 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:11:59 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:11:59 np0005589310 systemd[144806]: Finished Create User's Volatile Files and Directories.
Jan 20 14:11:59 np0005589310 systemd[144806]: Listening on D-Bus User Message Bus Socket.
Jan 20 14:11:59 np0005589310 systemd[144806]: Reached target Sockets.
Jan 20 14:11:59 np0005589310 systemd[144806]: Reached target Basic System.
Jan 20 14:11:59 np0005589310 systemd[144806]: Reached target Main User Target.
Jan 20 14:11:59 np0005589310 systemd[144806]: Startup finished in 151ms.
Jan 20 14:11:59 np0005589310 systemd[1]: Started User Manager for UID 0.
Jan 20 14:11:59 np0005589310 systemd[1]: Started ovn_controller container.
Jan 20 14:11:59 np0005589310 systemd[1]: c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a-1001d1e7b577e2eb.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 14:11:59 np0005589310 systemd[1]: c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a-1001d1e7b577e2eb.service: Failed with result 'exit-code'.
Jan 20 14:11:59 np0005589310 systemd[1]: Started Session c1 of User root.
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: INFO:__main__:Validating config file
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: INFO:__main__:Writing out command to execute
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: ++ cat /run_command
Jan 20 14:11:59 np0005589310 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: + ARGS=
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: + sudo kolla_copy_cacerts
Jan 20 14:11:59 np0005589310 systemd[1]: Started Session c2 of User root.
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: + [[ ! -n '' ]]
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: + . kolla_extend_start
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: + umask 0022
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 20 14:11:59 np0005589310 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 20 14:11:59 np0005589310 NetworkManager[48913]: <info>  [1768936319.7158] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 20 14:11:59 np0005589310 NetworkManager[48913]: <info>  [1768936319.7167] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 14:11:59 np0005589310 NetworkManager[48913]: <warn>  [1768936319.7169] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 14:11:59 np0005589310 NetworkManager[48913]: <info>  [1768936319.7175] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 20 14:11:59 np0005589310 NetworkManager[48913]: <info>  [1768936319.7180] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 20 14:11:59 np0005589310 NetworkManager[48913]: <info>  [1768936319.7183] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 20 14:11:59 np0005589310 kernel: br-int: entered promiscuous mode
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00010|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00011|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00012|features|INFO|OVS Feature: ct_flush, state: supported
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00013|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00014|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00015|main|INFO|OVS feature set changed, force recompute.
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00016|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00019|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00020|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 14:11:59 np0005589310 ovn_controller[144787]: 2026-01-20T19:11:59Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 14:11:59 np0005589310 NetworkManager[48913]: <info>  [1768936319.7332] manager: (ovn-beb8dd-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 20 14:11:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:11:59 np0005589310 kernel: genev_sys_6081: entered promiscuous mode
Jan 20 14:11:59 np0005589310 NetworkManager[48913]: <info>  [1768936319.7524] device (genev_sys_6081): carrier: link connected
Jan 20 14:11:59 np0005589310 NetworkManager[48913]: <info>  [1768936319.7527] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 20 14:11:59 np0005589310 systemd-udevd[144920]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:11:59 np0005589310 systemd-udevd[144923]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 14:12:00 np0005589310 python3.9[145051]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 20 14:12:01 np0005589310 python3.9[145203]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:12:01 np0005589310 python3.9[145326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936320.7322195-619-245181751861086/.source.yaml _original_basename=.4fa6tlds follow=False checksum=8b5a37e67ac838beaa0c9af9ba2de80244d453f2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:12:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:02 np0005589310 python3.9[145478]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:12:02 np0005589310 ovs-vsctl[145479]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 20 14:12:02 np0005589310 python3.9[145631]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:12:02 np0005589310 ovs-vsctl[145633]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 20 14:12:03 np0005589310 python3.9[145786]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:12:03 np0005589310 ovs-vsctl[145787]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 20 14:12:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:04 np0005589310 systemd[1]: session-46.scope: Deactivated successfully.
Jan 20 14:12:04 np0005589310 systemd[1]: session-46.scope: Consumed 56.463s CPU time.
Jan 20 14:12:04 np0005589310 systemd-logind[797]: Session 46 logged out. Waiting for processes to exit.
Jan 20 14:12:04 np0005589310 systemd-logind[797]: Removed session 46.
Jan 20 14:12:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:12:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:12:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:12:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:12:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:12:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:12:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:07 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:12:07 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 2095 writes, 9262 keys, 2095 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
Cumulative WAL: 2095 writes, 2095 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2095 writes, 9262 keys, 2095 commit groups, 1.0 writes per commit group, ingest: 12.36 MB, 0.02 MB/s
Interval WAL: 2095 writes, 2095 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     93.5      0.10              0.02         3    0.032       0      0       0.0       0.0
  L6      1/0    6.96 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    135.7    119.7      0.12              0.05         2    0.060    7222    732       0.0       0.0
 Sum      1/0    6.96 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     75.8    108.1      0.22              0.07         5    0.043    7222    732       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     88.4    125.7      0.19              0.07         4    0.046    7222    732       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    135.7    119.7      0.12              0.05         2    0.060    7222    732       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    136.8      0.06              0.02         2    0.032       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.9      0.03              0.00         1    0.031       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.009, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55eae3cfb8d0#2 capacity: 308.00 MB usage: 620.72 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(37,529.69 KB,0.167946%) FilterBlock(6,27.86 KB,0.00883325%) IndexBlock(6,63.17 KB,0.0200296%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 20 14:12:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:12:08 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:12:09 np0005589310 podman[145955]: 2026-01-20 19:12:09.02822951 +0000 UTC m=+0.040070497 container create 44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:12:09 np0005589310 systemd[1]: Started libpod-conmon-44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c.scope.
Jan 20 14:12:09 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:12:09 np0005589310 podman[145955]: 2026-01-20 19:12:09.011614993 +0000 UTC m=+0.023456010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:12:09 np0005589310 podman[145955]: 2026-01-20 19:12:09.132218612 +0000 UTC m=+0.144059629 container init 44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:12:09 np0005589310 podman[145955]: 2026-01-20 19:12:09.138954422 +0000 UTC m=+0.150795409 container start 44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:12:09 np0005589310 podman[145955]: 2026-01-20 19:12:09.142792994 +0000 UTC m=+0.154634001 container attach 44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swirles, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 20 14:12:09 np0005589310 festive_swirles[145971]: 167 167
Jan 20 14:12:09 np0005589310 systemd[1]: libpod-44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c.scope: Deactivated successfully.
Jan 20 14:12:09 np0005589310 podman[145955]: 2026-01-20 19:12:09.144802503 +0000 UTC m=+0.156643490 container died 44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:12:09 np0005589310 systemd[1]: var-lib-containers-storage-overlay-6a15c62d4bf086e2b1e8a20878b67a3480226f98b01b85e54e525f5a45b8774b-merged.mount: Deactivated successfully.
Jan 20 14:12:09 np0005589310 podman[145955]: 2026-01-20 19:12:09.192009619 +0000 UTC m=+0.203850606 container remove 44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Jan 20 14:12:09 np0005589310 systemd[1]: libpod-conmon-44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c.scope: Deactivated successfully.
Jan 20 14:12:09 np0005589310 systemd-logind[797]: New session 48 of user zuul.
Jan 20 14:12:09 np0005589310 podman[145998]: 2026-01-20 19:12:09.350787548 +0000 UTC m=+0.043278583 container create 375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:12:09 np0005589310 systemd[1]: Started Session 48 of User zuul.
Jan 20 14:12:09 np0005589310 systemd[1]: Started libpod-conmon-375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f.scope.
Jan 20 14:12:09 np0005589310 podman[145998]: 2026-01-20 19:12:09.330738549 +0000 UTC m=+0.023229604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:12:09 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:12:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57ad9ac075e127c1de97ba2c071ebea9e15182c0bcd68e517522fd49f876515/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57ad9ac075e127c1de97ba2c071ebea9e15182c0bcd68e517522fd49f876515/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57ad9ac075e127c1de97ba2c071ebea9e15182c0bcd68e517522fd49f876515/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57ad9ac075e127c1de97ba2c071ebea9e15182c0bcd68e517522fd49f876515/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57ad9ac075e127c1de97ba2c071ebea9e15182c0bcd68e517522fd49f876515/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:09 np0005589310 podman[145998]: 2026-01-20 19:12:09.472876972 +0000 UTC m=+0.165368037 container init 375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 20 14:12:09 np0005589310 podman[145998]: 2026-01-20 19:12:09.48035556 +0000 UTC m=+0.172846595 container start 375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:12:09 np0005589310 podman[145998]: 2026-01-20 19:12:09.483976917 +0000 UTC m=+0.176467962 container attach 375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:12:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:09 np0005589310 systemd[1]: Stopping User Manager for UID 0...
Jan 20 14:12:09 np0005589310 systemd[144806]: Activating special unit Exit the Session...
Jan 20 14:12:09 np0005589310 systemd[144806]: Stopped target Main User Target.
Jan 20 14:12:09 np0005589310 systemd[144806]: Stopped target Basic System.
Jan 20 14:12:09 np0005589310 systemd[144806]: Stopped target Paths.
Jan 20 14:12:09 np0005589310 systemd[144806]: Stopped target Sockets.
Jan 20 14:12:09 np0005589310 systemd[144806]: Stopped target Timers.
Jan 20 14:12:09 np0005589310 systemd[144806]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 14:12:09 np0005589310 systemd[144806]: Closed D-Bus User Message Bus Socket.
Jan 20 14:12:09 np0005589310 systemd[144806]: Stopped Create User's Volatile Files and Directories.
Jan 20 14:12:09 np0005589310 systemd[144806]: Removed slice User Application Slice.
Jan 20 14:12:09 np0005589310 systemd[144806]: Reached target Shutdown.
Jan 20 14:12:09 np0005589310 systemd[144806]: Finished Exit the Session.
Jan 20 14:12:09 np0005589310 systemd[144806]: Reached target Exit the Session.
Jan 20 14:12:09 np0005589310 systemd[1]: user@0.service: Deactivated successfully.
Jan 20 14:12:09 np0005589310 systemd[1]: Stopped User Manager for UID 0.
Jan 20 14:12:09 np0005589310 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 20 14:12:09 np0005589310 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 20 14:12:09 np0005589310 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 20 14:12:09 np0005589310 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 20 14:12:09 np0005589310 systemd[1]: Removed slice User Slice of UID 0.
Jan 20 14:12:09 np0005589310 distracted_greider[146018]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:12:09 np0005589310 distracted_greider[146018]: --> All data devices are unavailable
Jan 20 14:12:09 np0005589310 systemd[1]: libpod-375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f.scope: Deactivated successfully.
Jan 20 14:12:09 np0005589310 podman[145998]: 2026-01-20 19:12:09.983473288 +0000 UTC m=+0.675964323 container died 375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:12:10 np0005589310 systemd[1]: var-lib-containers-storage-overlay-e57ad9ac075e127c1de97ba2c071ebea9e15182c0bcd68e517522fd49f876515-merged.mount: Deactivated successfully.
Jan 20 14:12:10 np0005589310 podman[145998]: 2026-01-20 19:12:10.035937019 +0000 UTC m=+0.728428054 container remove 375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 14:12:10 np0005589310 systemd[1]: libpod-conmon-375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f.scope: Deactivated successfully.
Jan 20 14:12:10 np0005589310 python3.9[146200]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:12:10 np0005589310 podman[146263]: 2026-01-20 19:12:10.467446057 +0000 UTC m=+0.041297726 container create 4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:12:10 np0005589310 systemd[1]: Started libpod-conmon-4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492.scope.
Jan 20 14:12:10 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:12:10 np0005589310 podman[146263]: 2026-01-20 19:12:10.450761079 +0000 UTC m=+0.024612758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:12:10 np0005589310 podman[146263]: 2026-01-20 19:12:10.548244686 +0000 UTC m=+0.122096375 container init 4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_gates, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 14:12:10 np0005589310 podman[146263]: 2026-01-20 19:12:10.556634306 +0000 UTC m=+0.130485975 container start 4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_gates, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:12:10 np0005589310 podman[146263]: 2026-01-20 19:12:10.559669579 +0000 UTC m=+0.133521248 container attach 4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 20 14:12:10 np0005589310 xenodochial_gates[146281]: 167 167
Jan 20 14:12:10 np0005589310 systemd[1]: libpod-4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492.scope: Deactivated successfully.
Jan 20 14:12:10 np0005589310 conmon[146281]: conmon 4de0e951e4d6f8546b19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492.scope/container/memory.events
Jan 20 14:12:10 np0005589310 podman[146263]: 2026-01-20 19:12:10.562567448 +0000 UTC m=+0.136419117 container died 4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_gates, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:12:10 np0005589310 systemd[1]: var-lib-containers-storage-overlay-083b46913e28819c84f268e660a83acf8846858c6efb062e65d5d2879662afb3-merged.mount: Deactivated successfully.
Jan 20 14:12:10 np0005589310 podman[146263]: 2026-01-20 19:12:10.612828317 +0000 UTC m=+0.186679976 container remove 4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_gates, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 20 14:12:10 np0005589310 systemd[1]: libpod-conmon-4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492.scope: Deactivated successfully.
Jan 20 14:12:10 np0005589310 podman[146329]: 2026-01-20 19:12:10.77383012 +0000 UTC m=+0.037618809 container create eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_babbage, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:12:10 np0005589310 systemd[1]: Started libpod-conmon-eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb.scope.
Jan 20 14:12:10 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:12:10 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f4a81df92b3d8e9d06a9b8d5c2b257cf71dd564716a559f9fe1e92dda0a330/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:10 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f4a81df92b3d8e9d06a9b8d5c2b257cf71dd564716a559f9fe1e92dda0a330/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:10 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f4a81df92b3d8e9d06a9b8d5c2b257cf71dd564716a559f9fe1e92dda0a330/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:10 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f4a81df92b3d8e9d06a9b8d5c2b257cf71dd564716a559f9fe1e92dda0a330/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:10 np0005589310 podman[146329]: 2026-01-20 19:12:10.851874562 +0000 UTC m=+0.115663271 container init eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_babbage, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:12:10 np0005589310 podman[146329]: 2026-01-20 19:12:10.758083894 +0000 UTC m=+0.021872603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:12:10 np0005589310 podman[146329]: 2026-01-20 19:12:10.860196371 +0000 UTC m=+0.123985060 container start eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:12:10 np0005589310 podman[146329]: 2026-01-20 19:12:10.864819691 +0000 UTC m=+0.128608390 container attach eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_babbage, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 20 14:12:11 np0005589310 eager_babbage[146350]: {
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:    "0": [
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:        {
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "devices": [
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "/dev/loop3"
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            ],
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "lv_name": "ceph_lv0",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "lv_size": "21470642176",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "name": "ceph_lv0",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "tags": {
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.cluster_name": "ceph",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.crush_device_class": "",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.encrypted": "0",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.objectstore": "bluestore",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.osd_id": "0",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.type": "block",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.vdo": "0",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.with_tpm": "0"
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            },
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "type": "block",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "vg_name": "ceph_vg0"
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:        }
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:    ],
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:    "1": [
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:        {
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "devices": [
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "/dev/loop4"
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            ],
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "lv_name": "ceph_lv1",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "lv_size": "21470642176",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "name": "ceph_lv1",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "tags": {
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.cluster_name": "ceph",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.crush_device_class": "",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.encrypted": "0",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.objectstore": "bluestore",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.osd_id": "1",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.type": "block",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.vdo": "0",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.with_tpm": "0"
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            },
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "type": "block",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "vg_name": "ceph_vg1"
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:        }
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:    ],
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:    "2": [
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:        {
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "devices": [
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "/dev/loop5"
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            ],
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "lv_name": "ceph_lv2",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "lv_size": "21470642176",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "name": "ceph_lv2",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "tags": {
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.cluster_name": "ceph",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.crush_device_class": "",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.encrypted": "0",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.objectstore": "bluestore",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.osd_id": "2",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.type": "block",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.vdo": "0",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:                "ceph.with_tpm": "0"
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            },
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "type": "block",
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:            "vg_name": "ceph_vg2"
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:        }
Jan 20 14:12:11 np0005589310 eager_babbage[146350]:    ]
Jan 20 14:12:11 np0005589310 eager_babbage[146350]: }
Jan 20 14:12:11 np0005589310 systemd[1]: libpod-eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb.scope: Deactivated successfully.
Jan 20 14:12:11 np0005589310 podman[146329]: 2026-01-20 19:12:11.146712768 +0000 UTC m=+0.410501467 container died eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_babbage, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 14:12:11 np0005589310 systemd[1]: var-lib-containers-storage-overlay-62f4a81df92b3d8e9d06a9b8d5c2b257cf71dd564716a559f9fe1e92dda0a330-merged.mount: Deactivated successfully.
Jan 20 14:12:11 np0005589310 podman[146329]: 2026-01-20 19:12:11.192923361 +0000 UTC m=+0.456712060 container remove eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_babbage, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:12:11 np0005589310 systemd[1]: libpod-conmon-eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb.scope: Deactivated successfully.
Jan 20 14:12:11 np0005589310 python3.9[146496]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:11 np0005589310 podman[146591]: 2026-01-20 19:12:11.586691189 +0000 UTC m=+0.024745071 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:12:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:12 np0005589310 podman[146591]: 2026-01-20 19:12:12.117667441 +0000 UTC m=+0.555721303 container create 5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_napier, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 14:12:12 np0005589310 systemd[1]: Started libpod-conmon-5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7.scope.
Jan 20 14:12:12 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:12:12 np0005589310 podman[146591]: 2026-01-20 19:12:12.231551339 +0000 UTC m=+0.669605221 container init 5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_napier, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:12:12 np0005589310 podman[146591]: 2026-01-20 19:12:12.237915911 +0000 UTC m=+0.675969763 container start 5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_napier, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 14:12:12 np0005589310 podman[146591]: 2026-01-20 19:12:12.241414804 +0000 UTC m=+0.679468666 container attach 5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 20 14:12:12 np0005589310 funny_napier[146731]: 167 167
Jan 20 14:12:12 np0005589310 systemd[1]: libpod-5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7.scope: Deactivated successfully.
Jan 20 14:12:12 np0005589310 podman[146591]: 2026-01-20 19:12:12.243185387 +0000 UTC m=+0.681239249 container died 5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_napier, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 14:12:12 np0005589310 systemd[1]: var-lib-containers-storage-overlay-c682ecb244187bcc1eb163863e12a1f7d3d6ca20073b62e702fc50065e67f573-merged.mount: Deactivated successfully.
Jan 20 14:12:12 np0005589310 podman[146591]: 2026-01-20 19:12:12.278569591 +0000 UTC m=+0.716623453 container remove 5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:12:12 np0005589310 python3.9[146723]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:12 np0005589310 systemd[1]: libpod-conmon-5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7.scope: Deactivated successfully.
Jan 20 14:12:12 np0005589310 podman[146779]: 2026-01-20 19:12:12.42561307 +0000 UTC m=+0.038814367 container create dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chaplygin, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:12:12 np0005589310 systemd[1]: Started libpod-conmon-dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4.scope.
Jan 20 14:12:12 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:12:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d810e4db007c82d9e1f9e8f277cbce729f95c9eaa20b86dd99239cce099287a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d810e4db007c82d9e1f9e8f277cbce729f95c9eaa20b86dd99239cce099287a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d810e4db007c82d9e1f9e8f277cbce729f95c9eaa20b86dd99239cce099287a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:12 np0005589310 podman[146779]: 2026-01-20 19:12:12.409760591 +0000 UTC m=+0.022961908 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:12:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d810e4db007c82d9e1f9e8f277cbce729f95c9eaa20b86dd99239cce099287a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:12:12 np0005589310 podman[146779]: 2026-01-20 19:12:12.51567861 +0000 UTC m=+0.128879927 container init dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 14:12:12 np0005589310 podman[146779]: 2026-01-20 19:12:12.523495216 +0000 UTC m=+0.136696513 container start dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:12:12 np0005589310 podman[146779]: 2026-01-20 19:12:12.527720457 +0000 UTC m=+0.140921754 container attach dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chaplygin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:12:12 np0005589310 python3.9[146928]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:13 np0005589310 lvm[147102]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:12:13 np0005589310 lvm[147102]: VG ceph_vg0 finished
Jan 20 14:12:13 np0005589310 lvm[147108]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:12:13 np0005589310 lvm[147108]: VG ceph_vg1 finished
Jan 20 14:12:13 np0005589310 lvm[147128]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:12:13 np0005589310 lvm[147128]: VG ceph_vg2 finished
Jan 20 14:12:13 np0005589310 lucid_chaplygin[146842]: {}
Jan 20 14:12:13 np0005589310 systemd[1]: libpod-dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4.scope: Deactivated successfully.
Jan 20 14:12:13 np0005589310 podman[146779]: 2026-01-20 19:12:13.288757049 +0000 UTC m=+0.901958356 container died dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chaplygin, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:12:13 np0005589310 systemd[1]: libpod-dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4.scope: Consumed 1.244s CPU time.
Jan 20 14:12:13 np0005589310 systemd[1]: var-lib-containers-storage-overlay-d810e4db007c82d9e1f9e8f277cbce729f95c9eaa20b86dd99239cce099287a9-merged.mount: Deactivated successfully.
Jan 20 14:12:13 np0005589310 podman[146779]: 2026-01-20 19:12:13.363120434 +0000 UTC m=+0.976321731 container remove dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chaplygin, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 20 14:12:13 np0005589310 systemd[1]: libpod-conmon-dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4.scope: Deactivated successfully.
Jan 20 14:12:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:12:13 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:12:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:12:13 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:12:13 np0005589310 python3.9[147158]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:13 np0005589310 python3.9[147349]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.434622) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936334434731, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 644, "num_deletes": 251, "total_data_size": 770913, "memory_usage": 783592, "flush_reason": "Manual Compaction"}
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936334443641, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 764202, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9096, "largest_seqno": 9739, "table_properties": {"data_size": 760823, "index_size": 1287, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7522, "raw_average_key_size": 18, "raw_value_size": 753990, "raw_average_value_size": 1852, "num_data_blocks": 60, "num_entries": 407, "num_filter_entries": 407, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936283, "oldest_key_time": 1768936283, "file_creation_time": 1768936334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 9120 microseconds, and 2764 cpu microseconds.
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.443749) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 764202 bytes OK
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.443793) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.446521) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.446541) EVENT_LOG_v1 {"time_micros": 1768936334446535, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.446589) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 767474, prev total WAL file size 794290, number of live WAL files 2.
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.447497) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(746KB)], [23(7131KB)]
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936334447572, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8066841, "oldest_snapshot_seqno": -1}
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3325 keys, 6259179 bytes, temperature: kUnknown
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936334504040, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6259179, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6235183, "index_size": 14607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80568, "raw_average_key_size": 24, "raw_value_size": 6173283, "raw_average_value_size": 1856, "num_data_blocks": 636, "num_entries": 3325, "num_filter_entries": 3325, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 1768936334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.504300) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6259179 bytes
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.512345) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.7 rd, 110.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 7.0 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(18.7) write-amplify(8.2) OK, records in: 3839, records dropped: 514 output_compression: NoCompression
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.512405) EVENT_LOG_v1 {"time_micros": 1768936334512388, "job": 8, "event": "compaction_finished", "compaction_time_micros": 56546, "compaction_time_cpu_micros": 14144, "output_level": 6, "num_output_files": 1, "total_output_size": 6259179, "num_input_records": 3839, "num_output_records": 3325, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936334512701, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936334514002, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.447313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.514108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.514116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.514118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.514120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:12:14 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.514124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:12:14 np0005589310 python3.9[147499]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:12:15 np0005589310 python3.9[147651]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 20 14:12:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:16 np0005589310 python3.9[147801]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:12:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:17 np0005589310 python3.9[147922]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936336.2239652-81-280257201345664/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:18 np0005589310 python3.9[148073]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:12:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:18 np0005589310 python3.9[148194]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936338.0199502-96-210961827006769/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:19 np0005589310 python3.9[148346]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:12:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:20 np0005589310 python3.9[148430]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:12:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:23 np0005589310 python3.9[148583]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:12:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:24 np0005589310 python3.9[148736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:12:24 np0005589310 python3.9[148857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936343.5993252-133-156832045639515/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:25 np0005589310 python3.9[149007]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:12:25 np0005589310 python3.9[149128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936344.633798-133-129267100243576/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:26 np0005589310 python3.9[149278]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:12:27 np0005589310 python3.9[149399]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936346.2629704-177-203112403480213/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:27 np0005589310 python3.9[149549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:12:28 np0005589310 python3.9[149670]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936347.3796203-177-95321056905441/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:28 np0005589310 python3.9[149820]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:12:29 np0005589310 python3.9[149974]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:30 np0005589310 ovn_controller[144787]: 2026-01-20T19:12:30Z|00025|memory|INFO|16128 kB peak resident set size after 30.4 seconds
Jan 20 14:12:30 np0005589310 ovn_controller[144787]: 2026-01-20T19:12:30Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 20 14:12:30 np0005589310 podman[150098]: 2026-01-20 19:12:30.130970111 +0000 UTC m=+0.095599542 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 14:12:30 np0005589310 python3.9[150146]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:12:30 np0005589310 python3.9[150230]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:31 np0005589310 python3.9[150382]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:12:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:12:31
Jan 20 14:12:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:12:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:12:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.log', 'backups', 'vms', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.meta']
Jan 20 14:12:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:12:31 np0005589310 python3.9[150460]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:32 np0005589310 python3.9[150612]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:12:32 np0005589310 python3.9[150764]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:12:33 np0005589310 python3.9[150842]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:12:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:33 np0005589310 python3.9[150994]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:12:34 np0005589310 python3.9[151072]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:12:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:12:34 np0005589310 python3.9[151224]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:12:35 np0005589310 systemd[1]: Reloading.
Jan 20 14:12:35 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:35 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:36 np0005589310 python3.9[151414]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:12:36 np0005589310 python3.9[151492]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:12:37 np0005589310 python3.9[151644]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:12:37 np0005589310 python3.9[151722]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:12:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:38 np0005589310 python3.9[151874]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:12:38 np0005589310 systemd[1]: Reloading.
Jan 20 14:12:38 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:12:38 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:12:38 np0005589310 systemd[1]: Starting Create netns directory...
Jan 20 14:12:38 np0005589310 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 14:12:38 np0005589310 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 14:12:38 np0005589310 systemd[1]: Finished Create netns directory.
Jan 20 14:12:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:39 np0005589310 python3.9[152066]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:40 np0005589310 python3.9[152218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:12:40 np0005589310 python3.9[152341]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936359.560953-328-127661444211043/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:41 np0005589310 python3.9[152493]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:12:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:42 np0005589310 python3.9[152645]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:12:42 np0005589310 python3.9[152797]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:12:43 np0005589310 python3.9[152920]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936362.2793038-361-252779897983029/.source.json _original_basename=.xkh550wa follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:12:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:43 np0005589310 python3.9[153070]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:12:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:12:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:45 np0005589310 python3.9[153493]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 20 14:12:46 np0005589310 python3.9[153645]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 14:12:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:47 np0005589310 python3[153797]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 14:12:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:57 np0005589310 podman[153811]: 2026-01-20 19:12:57.097305835 +0000 UTC m=+9.154866086 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:12:57 np0005589310 podman[153952]: 2026-01-20 19:12:57.227921592 +0000 UTC m=+0.048202061 container create 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 20 14:12:57 np0005589310 podman[153952]: 2026-01-20 19:12:57.199940024 +0000 UTC m=+0.020220523 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:12:57 np0005589310 python3[153797]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 14:12:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:12:57 np0005589310 python3.9[154140]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:12:58 np0005589310 python3.9[154294]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:12:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:12:58 np0005589310 python3.9[154370]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:12:59 np0005589310 python3.9[154521]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768936379.047677-439-181628055902260/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:12:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:00 np0005589310 python3.9[154597]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:13:00 np0005589310 systemd[1]: Reloading.
Jan 20 14:13:00 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:13:00 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:13:00 np0005589310 podman[154599]: 2026-01-20 19:13:00.434907808 +0000 UTC m=+0.127244777 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 14:13:01 np0005589310 python3.9[154733]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:13:01 np0005589310 systemd[1]: Reloading.
Jan 20 14:13:01 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:13:01 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:13:01 np0005589310 systemd[1]: Starting ovn_metadata_agent container...
Jan 20 14:13:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:03 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:13:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b505a92993e7866c8202466dc589dba7160bca5ca9a37362c648ac2f6a55d590/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b505a92993e7866c8202466dc589dba7160bca5ca9a37362c648ac2f6a55d590/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:03 np0005589310 systemd[1]: Started /usr/bin/podman healthcheck run 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef.
Jan 20 14:13:03 np0005589310 podman[154774]: 2026-01-20 19:13:03.24114014 +0000 UTC m=+1.516831560 container init 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: + sudo -E kolla_set_configs
Jan 20 14:13:03 np0005589310 podman[154774]: 2026-01-20 19:13:03.269117088 +0000 UTC m=+1.544808488 container start 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 14:13:03 np0005589310 edpm-start-podman-container[154774]: ovn_metadata_agent
Jan 20 14:13:03 np0005589310 edpm-start-podman-container[154773]: Creating additional drop-in dependency for "ovn_metadata_agent" (155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef)
Jan 20 14:13:03 np0005589310 podman[154797]: 2026-01-20 19:13:03.338153765 +0000 UTC m=+0.055473604 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 20 14:13:03 np0005589310 systemd[1]: Reloading.
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: INFO:__main__:Validating config file
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: INFO:__main__:Copying service configuration files
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: INFO:__main__:Writing out command to execute
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: ++ cat /run_command
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: + CMD=neutron-ovn-metadata-agent
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: + ARGS=
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: + sudo kolla_copy_cacerts
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: + [[ ! -n '' ]]
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: + . kolla_extend_start
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: Running command: 'neutron-ovn-metadata-agent'
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: + umask 0022
Jan 20 14:13:03 np0005589310 ovn_metadata_agent[154791]: + exec neutron-ovn-metadata-agent
Jan 20 14:13:03 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:13:03 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:13:03 np0005589310 systemd[1]: Started ovn_metadata_agent container.
Jan 20 14:13:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:04 np0005589310 python3.9[155031]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 20 14:13:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:13:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:13:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:13:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:13:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:13:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:13:05 np0005589310 python3.9[155183]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.399 154796 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.400 154796 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.400 154796 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.401 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.401 154796 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.401 154796 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.401 154796 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.401 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.401 154796 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.401 154796 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.404 154796 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.404 154796 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.404 154796 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.404 154796 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.405 154796 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.405 154796 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.405 154796 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.405 154796 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.405 154796 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.405 154796 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.405 154796 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.409 154796 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.409 154796 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.409 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.409 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.409 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.409 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.409 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.436 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.436 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.436 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.446 154796 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.446 154796 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.446 154796 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.446 154796 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.446 154796 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.459 154796 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 15f2b046-37e6-488b-9e52-3d187c798598 (UUID: 15f2b046-37e6-488b-9e52-3d187c798598) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.480 154796 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.480 154796 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.480 154796 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.480 154796 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.483 154796 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.488 154796 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.494 154796 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '15f2b046-37e6-488b-9e52-3d187c798598'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fb5fc7f0b80>], external_ids={}, name=15f2b046-37e6-488b-9e52-3d187c798598, nb_cfg_timestamp=1768936327737, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.495 154796 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fb5fc772c10>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.496 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.496 154796 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.496 154796 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.497 154796 INFO oslo_service.service [-] Starting 1 workers#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.501 154796 DEBUG oslo_service.service [-] Started child 155254 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.504 154796 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpqelov3wn/privsep.sock']#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.504 155254 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-957161'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.527 155254 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.528 155254 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.528 155254 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.532 155254 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.539 155254 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 20 14:13:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.544 155254 INFO eventlet.wsgi.server [-] (155254) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 20 14:13:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:05 np0005589310 python3.9[155312]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936384.7960057-484-143695302364000/.source.yaml _original_basename=.8d6izojx follow=False checksum=47c886dea5e425583a8c1699aae0fd4573459ba9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:06 np0005589310 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 20 14:13:06 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.197 154796 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 20 14:13:06 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.199 154796 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpqelov3wn/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 20 14:13:06 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.047 155338 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 20 14:13:06 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.052 155338 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 20 14:13:06 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.054 155338 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 20 14:13:06 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.054 155338 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155338#033[00m
Jan 20 14:13:06 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.202 155338 DEBUG oslo.privsep.daemon [-] privsep: reply[c015d247-90c3-4336-8e16-f1e738c84e03]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 20 14:13:06 np0005589310 systemd[1]: session-48.scope: Deactivated successfully.
Jan 20 14:13:06 np0005589310 systemd[1]: session-48.scope: Consumed 54.342s CPU time.
Jan 20 14:13:06 np0005589310 systemd-logind[797]: Session 48 logged out. Waiting for processes to exit.
Jan 20 14:13:06 np0005589310 systemd-logind[797]: Removed session 48.
Jan 20 14:13:06 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.770 155338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:13:06 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.771 155338 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:13:06 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.771 155338 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.370 155338 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5b48f1-9436-49bf-9323-90e90dae6ab4]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.373 154796 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=15f2b046-37e6-488b-9e52-3d187c798598, column=external_ids, values=({'neutron:ovn-metadata-id': '3059e1c8-eb87-5eb4-929e-9633646f5b0f'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.380 154796 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=15f2b046-37e6-488b-9e52-3d187c798598, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.386 154796 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.386 154796 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.386 154796 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.386 154796 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.386 154796 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.386 154796 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.386 154796 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.390 154796 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.390 154796 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.390 154796 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.390 154796 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.390 154796 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.390 154796 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.390 154796 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.400 154796 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.400 154796 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.400 154796 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.400 154796 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.400 154796 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.400 154796 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.400 154796 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.407 154796 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.407 154796 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.407 154796 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.407 154796 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.407 154796 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.407 154796 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.407 154796 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.412 154796 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.412 154796 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.412 154796 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.412 154796 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.412 154796 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.412 154796 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.412 154796 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:13:07 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 20 14:13:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:11 np0005589310 systemd-logind[797]: New session 49 of user zuul.
Jan 20 14:13:11 np0005589310 systemd[1]: Started Session 49 of User zuul.
Jan 20 14:13:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:12 np0005589310 python3.9[155496]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:13:13 np0005589310 python3.9[155695]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:13:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:13:13 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:13:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:13:14 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:13:15 np0005589310 python3.9[155988]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:13:15 np0005589310 systemd[1]: Reloading.
Jan 20 14:13:15 np0005589310 podman[156028]: 2026-01-20 19:13:15.065432146 +0000 UTC m=+0.042395440 container create 7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:13:15 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:13:15 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:13:15 np0005589310 podman[156028]: 2026-01-20 19:13:15.045104913 +0000 UTC m=+0.022068207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:13:15 np0005589310 systemd[1]: Started libpod-conmon-7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95.scope.
Jan 20 14:13:15 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:13:15 np0005589310 podman[156028]: 2026-01-20 19:13:15.371541083 +0000 UTC m=+0.348504377 container init 7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:13:15 np0005589310 podman[156028]: 2026-01-20 19:13:15.378893537 +0000 UTC m=+0.355856811 container start 7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swartz, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:13:15 np0005589310 podman[156028]: 2026-01-20 19:13:15.382981405 +0000 UTC m=+0.359944679 container attach 7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swartz, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 14:13:15 np0005589310 suspicious_swartz[156079]: 167 167
Jan 20 14:13:15 np0005589310 systemd[1]: libpod-7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95.scope: Deactivated successfully.
Jan 20 14:13:15 np0005589310 podman[156028]: 2026-01-20 19:13:15.38697919 +0000 UTC m=+0.363942464 container died 7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:13:15 np0005589310 systemd[1]: var-lib-containers-storage-overlay-1682b33082572893d537790bfd409fcd448243b9d85d419954a712d9b9eb5455-merged.mount: Deactivated successfully.
Jan 20 14:13:15 np0005589310 podman[156028]: 2026-01-20 19:13:15.429699576 +0000 UTC m=+0.406662840 container remove 7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:13:15 np0005589310 systemd[1]: libpod-conmon-7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95.scope: Deactivated successfully.
Jan 20 14:13:15 np0005589310 podman[156177]: 2026-01-20 19:13:15.594930319 +0000 UTC m=+0.041943309 container create 3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:13:15 np0005589310 systemd[1]: Started libpod-conmon-3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234.scope.
Jan 20 14:13:15 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:13:15 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ec80f33bd2ccb2bf8b53fa9663ea34f090acdf8a88d31ac8840364c1493e99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:15 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ec80f33bd2ccb2bf8b53fa9663ea34f090acdf8a88d31ac8840364c1493e99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:15 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ec80f33bd2ccb2bf8b53fa9663ea34f090acdf8a88d31ac8840364c1493e99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:15 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ec80f33bd2ccb2bf8b53fa9663ea34f090acdf8a88d31ac8840364c1493e99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:15 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ec80f33bd2ccb2bf8b53fa9663ea34f090acdf8a88d31ac8840364c1493e99/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:15 np0005589310 podman[156177]: 2026-01-20 19:13:15.665179331 +0000 UTC m=+0.112192371 container init 3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 14:13:15 np0005589310 podman[156177]: 2026-01-20 19:13:15.57691355 +0000 UTC m=+0.023926560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:13:15 np0005589310 podman[156177]: 2026-01-20 19:13:15.672441275 +0000 UTC m=+0.119454285 container start 3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:13:15 np0005589310 podman[156177]: 2026-01-20 19:13:15.676103552 +0000 UTC m=+0.123116572 container attach 3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_brown, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Jan 20 14:13:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:16 np0005589310 python3.9[156274]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:13:16 np0005589310 quirky_brown[156194]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:13:16 np0005589310 quirky_brown[156194]: --> All data devices are unavailable
Jan 20 14:13:16 np0005589310 network[156304]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:13:16 np0005589310 network[156305]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:13:16 np0005589310 network[156306]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:13:16 np0005589310 systemd[1]: libpod-3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234.scope: Deactivated successfully.
Jan 20 14:13:16 np0005589310 podman[156177]: 2026-01-20 19:13:16.18518745 +0000 UTC m=+0.632200490 container died 3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_brown, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:13:16 np0005589310 systemd[1]: var-lib-containers-storage-overlay-23ec80f33bd2ccb2bf8b53fa9663ea34f090acdf8a88d31ac8840364c1493e99-merged.mount: Deactivated successfully.
Jan 20 14:13:16 np0005589310 podman[156177]: 2026-01-20 19:13:16.862546633 +0000 UTC m=+1.309559623 container remove 3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_brown, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:13:16 np0005589310 systemd[1]: libpod-conmon-3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234.scope: Deactivated successfully.
Jan 20 14:13:17 np0005589310 podman[156411]: 2026-01-20 19:13:17.323129287 +0000 UTC m=+0.042667318 container create df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:13:17 np0005589310 systemd[1]: Started libpod-conmon-df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7.scope.
Jan 20 14:13:17 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:13:17 np0005589310 podman[156411]: 2026-01-20 19:13:17.399155876 +0000 UTC m=+0.118693947 container init df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:13:17 np0005589310 podman[156411]: 2026-01-20 19:13:17.304322628 +0000 UTC m=+0.023860689 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:13:17 np0005589310 podman[156411]: 2026-01-20 19:13:17.40690089 +0000 UTC m=+0.126438921 container start df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:13:17 np0005589310 podman[156411]: 2026-01-20 19:13:17.410540247 +0000 UTC m=+0.130078308 container attach df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 20 14:13:17 np0005589310 crazy_davinci[156431]: 167 167
Jan 20 14:13:17 np0005589310 podman[156411]: 2026-01-20 19:13:17.412048103 +0000 UTC m=+0.131586134 container died df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:13:17 np0005589310 systemd[1]: libpod-df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7.scope: Deactivated successfully.
Jan 20 14:13:17 np0005589310 systemd[1]: var-lib-containers-storage-overlay-e31c9dac90acffbfb3e6a7b89c51be10935fca18ba38f0e2dfbbf185322ba6ea-merged.mount: Deactivated successfully.
Jan 20 14:13:17 np0005589310 podman[156411]: 2026-01-20 19:13:17.450691153 +0000 UTC m=+0.170229184 container remove df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:13:17 np0005589310 systemd[1]: libpod-conmon-df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7.scope: Deactivated successfully.
Jan 20 14:13:17 np0005589310 podman[156467]: 2026-01-20 19:13:17.607400433 +0000 UTC m=+0.044369367 container create 76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:13:17 np0005589310 systemd[1]: Started libpod-conmon-76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8.scope.
Jan 20 14:13:17 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:13:17 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ca6561e2d112906699e10f53a95b61198a16c7f3f3b2f7cbc472eaea3bd987/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:17 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ca6561e2d112906699e10f53a95b61198a16c7f3f3b2f7cbc472eaea3bd987/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:17 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ca6561e2d112906699e10f53a95b61198a16c7f3f3b2f7cbc472eaea3bd987/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:17 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ca6561e2d112906699e10f53a95b61198a16c7f3f3b2f7cbc472eaea3bd987/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:17 np0005589310 podman[156467]: 2026-01-20 19:13:17.668967229 +0000 UTC m=+0.105936183 container init 76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:13:17 np0005589310 podman[156467]: 2026-01-20 19:13:17.676523038 +0000 UTC m=+0.113491972 container start 76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:13:17 np0005589310 podman[156467]: 2026-01-20 19:13:17.680020341 +0000 UTC m=+0.116989425 container attach 76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:13:17 np0005589310 podman[156467]: 2026-01-20 19:13:17.586812023 +0000 UTC m=+0.023780977 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:13:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]: {
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:    "0": [
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:        {
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "devices": [
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "/dev/loop3"
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            ],
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "lv_name": "ceph_lv0",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "lv_size": "21470642176",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "name": "ceph_lv0",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "tags": {
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.cluster_name": "ceph",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.crush_device_class": "",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.encrypted": "0",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.objectstore": "bluestore",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.osd_id": "0",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.type": "block",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.vdo": "0",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.with_tpm": "0"
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            },
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "type": "block",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "vg_name": "ceph_vg0"
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:        }
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:    ],
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:    "1": [
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:        {
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "devices": [
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "/dev/loop4"
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            ],
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "lv_name": "ceph_lv1",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "lv_size": "21470642176",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "name": "ceph_lv1",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "tags": {
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.cluster_name": "ceph",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.crush_device_class": "",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.encrypted": "0",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.objectstore": "bluestore",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.osd_id": "1",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.type": "block",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.vdo": "0",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.with_tpm": "0"
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            },
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "type": "block",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "vg_name": "ceph_vg1"
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:        }
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:    ],
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:    "2": [
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:        {
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "devices": [
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "/dev/loop5"
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            ],
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "lv_name": "ceph_lv2",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "lv_size": "21470642176",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "name": "ceph_lv2",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "tags": {
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.cluster_name": "ceph",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.crush_device_class": "",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.encrypted": "0",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.objectstore": "bluestore",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.osd_id": "2",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.type": "block",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.vdo": "0",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:                "ceph.with_tpm": "0"
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            },
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "type": "block",
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:            "vg_name": "ceph_vg2"
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:        }
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]:    ]
Jan 20 14:13:17 np0005589310 zealous_brattain[156488]: }
Jan 20 14:13:17 np0005589310 systemd[1]: libpod-76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8.scope: Deactivated successfully.
Jan 20 14:13:18 np0005589310 podman[156500]: 2026-01-20 19:13:18.009429083 +0000 UTC m=+0.030263512 container died 76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 14:13:18 np0005589310 systemd[1]: var-lib-containers-storage-overlay-e5ca6561e2d112906699e10f53a95b61198a16c7f3f3b2f7cbc472eaea3bd987-merged.mount: Deactivated successfully.
Jan 20 14:13:18 np0005589310 podman[156500]: 2026-01-20 19:13:18.486480297 +0000 UTC m=+0.507314666 container remove 76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_brattain, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:13:18 np0005589310 systemd[1]: libpod-conmon-76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8.scope: Deactivated successfully.
Jan 20 14:13:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:18 np0005589310 podman[156589]: 2026-01-20 19:13:18.9822693 +0000 UTC m=+0.043206770 container create a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 14:13:19 np0005589310 systemd[1]: Started libpod-conmon-a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1.scope.
Jan 20 14:13:19 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:13:19 np0005589310 podman[156589]: 2026-01-20 19:13:18.964491136 +0000 UTC m=+0.025428606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:13:19 np0005589310 podman[156589]: 2026-01-20 19:13:19.085758322 +0000 UTC m=+0.146695802 container init a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:13:19 np0005589310 podman[156589]: 2026-01-20 19:13:19.09322127 +0000 UTC m=+0.154158720 container start a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:13:19 np0005589310 jovial_bassi[156608]: 167 167
Jan 20 14:13:19 np0005589310 systemd[1]: libpod-a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1.scope: Deactivated successfully.
Jan 20 14:13:19 np0005589310 podman[156589]: 2026-01-20 19:13:19.123584672 +0000 UTC m=+0.184522122 container attach a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:13:19 np0005589310 podman[156589]: 2026-01-20 19:13:19.124133166 +0000 UTC m=+0.185070616 container died a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:13:19 np0005589310 systemd[1]: var-lib-containers-storage-overlay-c9335d83330c4132ba8cf95f760e8a3ab7287836f7fb34e7e27337d3870d52ae-merged.mount: Deactivated successfully.
Jan 20 14:13:19 np0005589310 podman[156589]: 2026-01-20 19:13:19.159573459 +0000 UTC m=+0.220510909 container remove a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:13:19 np0005589310 systemd[1]: libpod-conmon-a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1.scope: Deactivated successfully.
Jan 20 14:13:19 np0005589310 podman[156647]: 2026-01-20 19:13:19.33982901 +0000 UTC m=+0.047253226 container create 0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 14:13:19 np0005589310 systemd[1]: Started libpod-conmon-0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4.scope.
Jan 20 14:13:19 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:13:19 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32d26b4da06f548278671de0adf498272de8dbb3195f4ffd30a9e37681e090da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:19 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32d26b4da06f548278671de0adf498272de8dbb3195f4ffd30a9e37681e090da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:19 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32d26b4da06f548278671de0adf498272de8dbb3195f4ffd30a9e37681e090da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:19 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32d26b4da06f548278671de0adf498272de8dbb3195f4ffd30a9e37681e090da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:13:19 np0005589310 podman[156647]: 2026-01-20 19:13:19.319453485 +0000 UTC m=+0.026877731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:13:19 np0005589310 podman[156647]: 2026-01-20 19:13:19.422944928 +0000 UTC m=+0.130369154 container init 0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 14:13:19 np0005589310 podman[156647]: 2026-01-20 19:13:19.431208155 +0000 UTC m=+0.138632371 container start 0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:13:19 np0005589310 podman[156647]: 2026-01-20 19:13:19.435069737 +0000 UTC m=+0.142493953 container attach 0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:13:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:20 np0005589310 lvm[156810]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:13:20 np0005589310 lvm[156810]: VG ceph_vg1 finished
Jan 20 14:13:20 np0005589310 lvm[156802]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:13:20 np0005589310 lvm[156802]: VG ceph_vg0 finished
Jan 20 14:13:20 np0005589310 lvm[156822]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:13:20 np0005589310 lvm[156822]: VG ceph_vg2 finished
Jan 20 14:13:20 np0005589310 quirky_payne[156668]: {}
Jan 20 14:13:20 np0005589310 systemd[1]: libpod-0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4.scope: Deactivated successfully.
Jan 20 14:13:20 np0005589310 systemd[1]: libpod-0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4.scope: Consumed 1.324s CPU time.
Jan 20 14:13:20 np0005589310 podman[156647]: 2026-01-20 19:13:20.267907331 +0000 UTC m=+0.975331567 container died 0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:13:20 np0005589310 systemd[1]: var-lib-containers-storage-overlay-32d26b4da06f548278671de0adf498272de8dbb3195f4ffd30a9e37681e090da-merged.mount: Deactivated successfully.
Jan 20 14:13:20 np0005589310 podman[156647]: 2026-01-20 19:13:20.310827163 +0000 UTC m=+1.018251379 container remove 0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:13:20 np0005589310 systemd[1]: libpod-conmon-0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4.scope: Deactivated successfully.
Jan 20 14:13:20 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:13:20 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:13:20 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:13:20 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:13:20 np0005589310 python3.9[156942]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:13:20 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:13:20 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:13:21 np0005589310 python3.9[157120]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:13:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:22 np0005589310 python3.9[157273]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:13:23 np0005589310 python3.9[157426]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:13:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:24 np0005589310 python3.9[157579]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:13:24 np0005589310 python3.9[157732]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:13:25 np0005589310 python3.9[157885]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:13:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:26 np0005589310 python3.9[158038]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:26 np0005589310 python3.9[158190]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:27 np0005589310 python3.9[158342]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:27 np0005589310 python3.9[158494]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:28 np0005589310 python3.9[158646]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:29 np0005589310 python3.9[158798]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:29 np0005589310 python3.9[158950]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:30 np0005589310 python3.9[159104]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:30 np0005589310 podman[159256]: 2026-01-20 19:13:30.720846494 +0000 UTC m=+0.093839875 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 20 14:13:30 np0005589310 python3.9[159257]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:31 np0005589310 python3.9[159434]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:31 np0005589310 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:13:31 np0005589310 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 5615 writes, 24K keys, 5615 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5615 writes, 879 syncs, 6.39 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5615 writes, 24K keys, 5615 commit groups, 1.0 writes per commit group, ingest: 18.71 MB, 0.03 MB/s#012Interval WAL: 5615 writes, 879 syncs, 6.39 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 20 14:13:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:13:31
Jan 20 14:13:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:13:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:13:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'backups']
Jan 20 14:13:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:13:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:31 np0005589310 python3.9[159586]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:32 np0005589310 python3.9[159738]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:33 np0005589310 python3.9[159890]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:33 np0005589310 podman[160014]: 2026-01-20 19:13:33.425210157 +0000 UTC m=+0.052086891 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:13:33 np0005589310 python3.9[160061]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:13:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:34 np0005589310 python3.9[160213]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:13:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:13:35 np0005589310 python3.9[160365]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:13:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:35 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:13:35 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 6904 writes, 28K keys, 6904 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6904 writes, 1315 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6904 writes, 28K keys, 6904 commit groups, 1.0 writes per commit group, ingest: 19.80 MB, 0.03 MB/s
Interval WAL: 6904 writes, 1315 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 20 14:13:35 np0005589310 python3.9[160517]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:13:35 np0005589310 systemd[1]: Reloading.
Jan 20 14:13:35 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:13:35 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:13:36 np0005589310 python3.9[160703]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:13:37 np0005589310 python3.9[160856]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:13:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:37 np0005589310 python3.9[161009]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:13:38 np0005589310 python3.9[161162]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:13:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:39 np0005589310 python3.9[161315]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:13:39 np0005589310 python3.9[161468]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:13:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:40 np0005589310 python3.9[161621]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:13:41 np0005589310 python3.9[161774]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 20 14:13:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:41 np0005589310 python3.9[161927]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 14:13:42 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:13:42 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 5409 writes, 23K keys, 5409 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5409 writes, 759 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5409 writes, 23K keys, 5409 commit groups, 1.0 writes per commit group, ingest: 18.48 MB, 0.03 MB/s
Interval WAL: 5409 writes, 759 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 20 14:13:42 np0005589310 python3.9[162085]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 14:13:43 np0005589310 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:13:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:43 np0005589310 python3.9[162246]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:13:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:13:44 np0005589310 python3.9[162330]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:13:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:46 np0005589310 ceph-mgr[75417]: [devicehealth INFO root] Check health
Jan 20 14:13:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:13:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:13:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:01 np0005589310 podman[162517]: 2026-01-20 19:14:01.409654729 +0000 UTC m=+0.084769499 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:14:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:04 np0005589310 podman[162550]: 2026-01-20 19:14:04.392037179 +0000 UTC m=+0.061077586 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 14:14:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:14:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:14:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:14:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:14:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:14:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:14:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:14:05.438 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:14:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:14:05.439 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:14:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:14:05.439 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:14:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:14 np0005589310 kernel: SELinux:  Converting 2774 SID table entries...
Jan 20 14:14:14 np0005589310 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 14:14:14 np0005589310 kernel: SELinux:  policy capability open_perms=1
Jan 20 14:14:14 np0005589310 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 14:14:14 np0005589310 kernel: SELinux:  policy capability always_check_network=0
Jan 20 14:14:14 np0005589310 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 14:14:14 np0005589310 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 14:14:14 np0005589310 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 14:14:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:20 np0005589310 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:14:21 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:14:21 np0005589310 podman[162726]: 2026-01-20 19:14:21.785488993 +0000 UTC m=+0.045724710 container create ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:14:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:21 np0005589310 systemd[1]: Started libpod-conmon-ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b.scope.
Jan 20 14:14:21 np0005589310 podman[162726]: 2026-01-20 19:14:21.766081196 +0000 UTC m=+0.026316943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:14:21 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:14:21 np0005589310 podman[162726]: 2026-01-20 19:14:21.894570422 +0000 UTC m=+0.154806169 container init ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 14:14:21 np0005589310 podman[162726]: 2026-01-20 19:14:21.9031114 +0000 UTC m=+0.163347127 container start ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:14:21 np0005589310 podman[162726]: 2026-01-20 19:14:21.9080465 +0000 UTC m=+0.168282217 container attach ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:14:21 np0005589310 blissful_blackburn[162743]: 167 167
Jan 20 14:14:21 np0005589310 systemd[1]: libpod-ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b.scope: Deactivated successfully.
Jan 20 14:14:21 np0005589310 podman[162726]: 2026-01-20 19:14:21.915279059 +0000 UTC m=+0.175514776 container died ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 20 14:14:21 np0005589310 systemd[1]: var-lib-containers-storage-overlay-75c8f0fa0cc91dac964428a5d773b5b97cfa65c14e4c3b43a534eb345aa986b2-merged.mount: Deactivated successfully.
Jan 20 14:14:21 np0005589310 podman[162726]: 2026-01-20 19:14:21.964471425 +0000 UTC m=+0.224707142 container remove ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:14:21 np0005589310 systemd[1]: libpod-conmon-ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b.scope: Deactivated successfully.
Jan 20 14:14:22 np0005589310 podman[162766]: 2026-01-20 19:14:22.126763269 +0000 UTC m=+0.046638491 container create 20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lewin, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:14:22 np0005589310 systemd[1]: Started libpod-conmon-20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa.scope.
Jan 20 14:14:22 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:14:22 np0005589310 podman[162766]: 2026-01-20 19:14:22.10369546 +0000 UTC m=+0.023570712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:14:22 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26860381f9634a39a869e3468cb4761a02907befc0a1af8d8a08048344ba5d7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:22 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26860381f9634a39a869e3468cb4761a02907befc0a1af8d8a08048344ba5d7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:22 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26860381f9634a39a869e3468cb4761a02907befc0a1af8d8a08048344ba5d7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:22 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26860381f9634a39a869e3468cb4761a02907befc0a1af8d8a08048344ba5d7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:22 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26860381f9634a39a869e3468cb4761a02907befc0a1af8d8a08048344ba5d7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:22 np0005589310 podman[162766]: 2026-01-20 19:14:22.227293849 +0000 UTC m=+0.147169101 container init 20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lewin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 14:14:22 np0005589310 podman[162766]: 2026-01-20 19:14:22.234572539 +0000 UTC m=+0.154447801 container start 20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 14:14:22 np0005589310 podman[162766]: 2026-01-20 19:14:22.24865803 +0000 UTC m=+0.168533272 container attach 20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lewin, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:14:22 np0005589310 angry_lewin[162783]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:14:22 np0005589310 angry_lewin[162783]: --> All data devices are unavailable
Jan 20 14:14:22 np0005589310 systemd[1]: libpod-20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa.scope: Deactivated successfully.
Jan 20 14:14:22 np0005589310 podman[162766]: 2026-01-20 19:14:22.792282024 +0000 UTC m=+0.712157246 container died 20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:14:22 np0005589310 systemd[1]: var-lib-containers-storage-overlay-26860381f9634a39a869e3468cb4761a02907befc0a1af8d8a08048344ba5d7c-merged.mount: Deactivated successfully.
Jan 20 14:14:22 np0005589310 podman[162766]: 2026-01-20 19:14:22.831386557 +0000 UTC m=+0.751261779 container remove 20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lewin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:14:22 np0005589310 systemd[1]: libpod-conmon-20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa.scope: Deactivated successfully.
Jan 20 14:14:23 np0005589310 podman[162877]: 2026-01-20 19:14:23.247668629 +0000 UTC m=+0.037829936 container create 51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:14:23 np0005589310 systemd[1]: Started libpod-conmon-51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261.scope.
Jan 20 14:14:23 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:14:23 np0005589310 podman[162877]: 2026-01-20 19:14:23.320978097 +0000 UTC m=+0.111139414 container init 51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:14:23 np0005589310 podman[162877]: 2026-01-20 19:14:23.230875878 +0000 UTC m=+0.021037175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:14:23 np0005589310 podman[162877]: 2026-01-20 19:14:23.326673473 +0000 UTC m=+0.116834780 container start 51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:14:23 np0005589310 podman[162877]: 2026-01-20 19:14:23.329627078 +0000 UTC m=+0.119788385 container attach 51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 14:14:23 np0005589310 peaceful_shtern[162893]: 167 167
Jan 20 14:14:23 np0005589310 systemd[1]: libpod-51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261.scope: Deactivated successfully.
Jan 20 14:14:23 np0005589310 podman[162877]: 2026-01-20 19:14:23.33147516 +0000 UTC m=+0.121636467 container died 51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:14:23 np0005589310 systemd[1]: var-lib-containers-storage-overlay-856c4f61f223d2f72657235249d5e7463168fb9d00ddf17db01de23080ab6bea-merged.mount: Deactivated successfully.
Jan 20 14:14:23 np0005589310 podman[162877]: 2026-01-20 19:14:23.366419821 +0000 UTC m=+0.156581118 container remove 51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 14:14:23 np0005589310 systemd[1]: libpod-conmon-51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261.scope: Deactivated successfully.
Jan 20 14:14:23 np0005589310 podman[162916]: 2026-01-20 19:14:23.505806578 +0000 UTC m=+0.022855615 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:14:23 np0005589310 podman[162916]: 2026-01-20 19:14:23.604691632 +0000 UTC m=+0.121740649 container create 71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 20 14:14:23 np0005589310 systemd[1]: Started libpod-conmon-71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3.scope.
Jan 20 14:14:23 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:14:23 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ba5856ddcf4958d31c45846c547d8ab5f4a19af1729a69a8f404f0d200be18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:23 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ba5856ddcf4958d31c45846c547d8ab5f4a19af1729a69a8f404f0d200be18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:23 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ba5856ddcf4958d31c45846c547d8ab5f4a19af1729a69a8f404f0d200be18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:23 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ba5856ddcf4958d31c45846c547d8ab5f4a19af1729a69a8f404f0d200be18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:23 np0005589310 podman[162916]: 2026-01-20 19:14:23.704003605 +0000 UTC m=+0.221052642 container init 71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_clarke, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 14:14:23 np0005589310 podman[162916]: 2026-01-20 19:14:23.710390846 +0000 UTC m=+0.227439863 container start 71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:14:23 np0005589310 podman[162916]: 2026-01-20 19:14:23.722200967 +0000 UTC m=+0.239249984 container attach 71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_clarke, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 20 14:14:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]: {
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:    "0": [
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:        {
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "devices": [
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "/dev/loop3"
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            ],
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "lv_name": "ceph_lv0",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "lv_size": "21470642176",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "name": "ceph_lv0",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "tags": {
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.cluster_name": "ceph",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.crush_device_class": "",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.encrypted": "0",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.objectstore": "bluestore",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.osd_id": "0",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.type": "block",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.vdo": "0",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.with_tpm": "0"
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            },
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "type": "block",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "vg_name": "ceph_vg0"
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:        }
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:    ],
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:    "1": [
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:        {
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "devices": [
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "/dev/loop4"
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            ],
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "lv_name": "ceph_lv1",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "lv_size": "21470642176",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "name": "ceph_lv1",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "tags": {
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.cluster_name": "ceph",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.crush_device_class": "",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.encrypted": "0",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.objectstore": "bluestore",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.osd_id": "1",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.type": "block",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.vdo": "0",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.with_tpm": "0"
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            },
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "type": "block",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "vg_name": "ceph_vg1"
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:        }
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:    ],
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:    "2": [
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:        {
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "devices": [
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "/dev/loop5"
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            ],
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "lv_name": "ceph_lv2",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "lv_size": "21470642176",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "name": "ceph_lv2",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "tags": {
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.cluster_name": "ceph",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.crush_device_class": "",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.encrypted": "0",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.objectstore": "bluestore",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.osd_id": "2",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.type": "block",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.vdo": "0",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:                "ceph.with_tpm": "0"
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            },
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "type": "block",
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:            "vg_name": "ceph_vg2"
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:        }
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]:    ]
Jan 20 14:14:23 np0005589310 hungry_clarke[162933]: }
Jan 20 14:14:24 np0005589310 systemd[1]: libpod-71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3.scope: Deactivated successfully.
Jan 20 14:14:24 np0005589310 podman[162916]: 2026-01-20 19:14:24.015537153 +0000 UTC m=+0.532586170 container died 71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:14:24 np0005589310 systemd[1]: var-lib-containers-storage-overlay-68ba5856ddcf4958d31c45846c547d8ab5f4a19af1729a69a8f404f0d200be18-merged.mount: Deactivated successfully.
Jan 20 14:14:24 np0005589310 podman[162916]: 2026-01-20 19:14:24.063038602 +0000 UTC m=+0.580087629 container remove 71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_clarke, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 14:14:24 np0005589310 systemd[1]: libpod-conmon-71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3.scope: Deactivated successfully.
Jan 20 14:14:24 np0005589310 podman[163016]: 2026-01-20 19:14:24.522814895 +0000 UTC m=+0.037669153 container create 901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_darwin, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:14:24 np0005589310 systemd[1]: Started libpod-conmon-901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9.scope.
Jan 20 14:14:24 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:14:24 np0005589310 podman[163016]: 2026-01-20 19:14:24.592086574 +0000 UTC m=+0.106940832 container init 901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_darwin, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:14:24 np0005589310 podman[163016]: 2026-01-20 19:14:24.597996504 +0000 UTC m=+0.112850762 container start 901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_darwin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:14:24 np0005589310 podman[163016]: 2026-01-20 19:14:24.600909788 +0000 UTC m=+0.115764066 container attach 901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:14:24 np0005589310 youthful_darwin[163032]: 167 167
Jan 20 14:14:24 np0005589310 systemd[1]: libpod-901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9.scope: Deactivated successfully.
Jan 20 14:14:24 np0005589310 podman[163016]: 2026-01-20 19:14:24.601981222 +0000 UTC m=+0.116835480 container died 901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_darwin, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:14:24 np0005589310 podman[163016]: 2026-01-20 19:14:24.507405585 +0000 UTC m=+0.022259863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:14:24 np0005589310 systemd[1]: var-lib-containers-storage-overlay-ddc0389ca93d8d94030372cb1cda21d5547e45927d8658d4e5c7bf36cbff3120-merged.mount: Deactivated successfully.
Jan 20 14:14:24 np0005589310 podman[163016]: 2026-01-20 19:14:24.640448582 +0000 UTC m=+0.155302840 container remove 901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_darwin, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:14:24 np0005589310 systemd[1]: libpod-conmon-901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9.scope: Deactivated successfully.
Jan 20 14:14:24 np0005589310 podman[163055]: 2026-01-20 19:14:24.833611248 +0000 UTC m=+0.071797538 container create 2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 14:14:24 np0005589310 systemd[1]: Started libpod-conmon-2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff.scope.
Jan 20 14:14:24 np0005589310 podman[163055]: 2026-01-20 19:14:24.786561128 +0000 UTC m=+0.024747438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:14:24 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:14:24 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c5f3237df8b237716a3baa06abce384ebdd8ec2df93f70e2075f695fb855c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:24 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c5f3237df8b237716a3baa06abce384ebdd8ec2df93f70e2075f695fb855c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:24 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c5f3237df8b237716a3baa06abce384ebdd8ec2df93f70e2075f695fb855c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:24 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c5f3237df8b237716a3baa06abce384ebdd8ec2df93f70e2075f695fb855c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:14:24 np0005589310 podman[163055]: 2026-01-20 19:14:24.914742778 +0000 UTC m=+0.152929168 container init 2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:14:24 np0005589310 podman[163055]: 2026-01-20 19:14:24.925563477 +0000 UTC m=+0.163749767 container start 2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chebyshev, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:14:25 np0005589310 podman[163055]: 2026-01-20 19:14:25.019517582 +0000 UTC m=+0.257703882 container attach 2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 14:14:25 np0005589310 kernel: SELinux:  Converting 2774 SID table entries...
Jan 20 14:14:25 np0005589310 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 14:14:25 np0005589310 kernel: SELinux:  policy capability open_perms=1
Jan 20 14:14:25 np0005589310 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 14:14:25 np0005589310 kernel: SELinux:  policy capability always_check_network=0
Jan 20 14:14:25 np0005589310 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 14:14:25 np0005589310 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 14:14:25 np0005589310 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 14:14:25 np0005589310 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 20 14:14:25 np0005589310 lvm[163157]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:14:25 np0005589310 lvm[163158]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:14:25 np0005589310 lvm[163158]: VG ceph_vg1 finished
Jan 20 14:14:25 np0005589310 lvm[163157]: VG ceph_vg0 finished
Jan 20 14:14:25 np0005589310 lvm[163160]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:14:25 np0005589310 lvm[163160]: VG ceph_vg2 finished
Jan 20 14:14:25 np0005589310 stoic_chebyshev[163072]: {}
Jan 20 14:14:25 np0005589310 podman[163055]: 2026-01-20 19:14:25.695878666 +0000 UTC m=+0.934064956 container died 2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chebyshev, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:14:25 np0005589310 systemd[1]: libpod-2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff.scope: Deactivated successfully.
Jan 20 14:14:25 np0005589310 systemd[1]: libpod-2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff.scope: Consumed 1.218s CPU time.
Jan 20 14:14:25 np0005589310 systemd[1]: var-lib-containers-storage-overlay-c9c5f3237df8b237716a3baa06abce384ebdd8ec2df93f70e2075f695fb855c3-merged.mount: Deactivated successfully.
Jan 20 14:14:25 np0005589310 podman[163055]: 2026-01-20 19:14:25.756980646 +0000 UTC m=+0.995166936 container remove 2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:14:25 np0005589310 systemd[1]: libpod-conmon-2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff.scope: Deactivated successfully.
Jan 20 14:14:25 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:14:25 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:14:25 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:14:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Jan 20 14:14:25 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:14:27 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:14:27 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:14:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Jan 20 14:14:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 14:14:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:14:31
Jan 20 14:14:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:14:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:14:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'vms', '.mgr', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.meta', 'volumes']
Jan 20 14:14:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:14:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 14:14:32 np0005589310 podman[163202]: 2026-01-20 19:14:32.452578346 +0000 UTC m=+0.108231980 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 20 14:14:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:14:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:14:35 np0005589310 podman[163228]: 2026-01-20 19:14:35.374198906 +0000 UTC m=+0.049502154 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:14:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 14:14:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Jan 20 14:14:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Jan 20 14:14:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:14:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:14:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:14:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:14:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:03 np0005589310 podman[178593]: 2026-01-20 19:15:03.403665761 +0000 UTC m=+0.080566200 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:15:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:15:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:15:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:15:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:15:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:15:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:15:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:15:05.440 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:15:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:15:05.440 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:15:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:15:05.441 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:15:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:06 np0005589310 podman[180124]: 2026-01-20 19:15:06.400834969 +0000 UTC m=+0.080290354 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 14:15:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:20 np0005589310 kernel: SELinux:  Converting 2775 SID table entries...
Jan 20 14:15:20 np0005589310 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 14:15:20 np0005589310 kernel: SELinux:  policy capability open_perms=1
Jan 20 14:15:20 np0005589310 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 14:15:20 np0005589310 kernel: SELinux:  policy capability always_check_network=0
Jan 20 14:15:20 np0005589310 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 14:15:20 np0005589310 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 14:15:20 np0005589310 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 14:15:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:22 np0005589310 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 20 14:15:22 np0005589310 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 20 14:15:22 np0005589310 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 20 14:15:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:26 np0005589310 podman[180351]: 2026-01-20 19:15:26.479302036 +0000 UTC m=+0.074247370 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:15:26 np0005589310 podman[180351]: 2026-01-20 19:15:26.591839598 +0000 UTC m=+0.186784932 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 20 14:15:27 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:15:27 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:15:27 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:15:27 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:15:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:15:28 np0005589310 podman[180829]: 2026-01-20 19:15:28.541633716 +0000 UTC m=+0.041278928 container create 6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_nash, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:15:28 np0005589310 systemd[1]: Started libpod-conmon-6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130.scope.
Jan 20 14:15:28 np0005589310 podman[180829]: 2026-01-20 19:15:28.52334819 +0000 UTC m=+0.022993422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:15:28 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:15:28 np0005589310 podman[180829]: 2026-01-20 19:15:28.647232008 +0000 UTC m=+0.146877240 container init 6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_nash, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 14:15:28 np0005589310 podman[180829]: 2026-01-20 19:15:28.655894689 +0000 UTC m=+0.155539901 container start 6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:15:28 np0005589310 podman[180829]: 2026-01-20 19:15:28.660291086 +0000 UTC m=+0.159936298 container attach 6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_nash, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:15:28 np0005589310 exciting_nash[180846]: 167 167
Jan 20 14:15:28 np0005589310 systemd[1]: libpod-6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130.scope: Deactivated successfully.
Jan 20 14:15:28 np0005589310 conmon[180846]: conmon 6ef8c9cb15dfbee19999 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130.scope/container/memory.events
Jan 20 14:15:28 np0005589310 podman[180829]: 2026-01-20 19:15:28.664628613 +0000 UTC m=+0.164273845 container died 6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_nash, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:15:28 np0005589310 systemd[1]: var-lib-containers-storage-overlay-1ba4fb6cd5a13a74fe7b08fc996aea0c1503ef7512d2a17f37f5d332c0573b9f-merged.mount: Deactivated successfully.
Jan 20 14:15:28 np0005589310 podman[180829]: 2026-01-20 19:15:28.720954715 +0000 UTC m=+0.220599927 container remove 6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_nash, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 14:15:28 np0005589310 systemd[1]: libpod-conmon-6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130.scope: Deactivated successfully.
Jan 20 14:15:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:28 np0005589310 podman[180869]: 2026-01-20 19:15:28.886839676 +0000 UTC m=+0.050114211 container create 2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ganguly, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:15:28 np0005589310 systemd[1]: Started libpod-conmon-2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626.scope.
Jan 20 14:15:28 np0005589310 podman[180869]: 2026-01-20 19:15:28.867135546 +0000 UTC m=+0.030410081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:15:28 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:15:28 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b372eb13545021e02f6a0a763e09da840805a3989cb00283019133d9987104dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:28 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b372eb13545021e02f6a0a763e09da840805a3989cb00283019133d9987104dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:28 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b372eb13545021e02f6a0a763e09da840805a3989cb00283019133d9987104dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:28 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b372eb13545021e02f6a0a763e09da840805a3989cb00283019133d9987104dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:28 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b372eb13545021e02f6a0a763e09da840805a3989cb00283019133d9987104dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:28 np0005589310 podman[180869]: 2026-01-20 19:15:28.978537541 +0000 UTC m=+0.141812096 container init 2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ganguly, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:15:28 np0005589310 podman[180869]: 2026-01-20 19:15:28.985924221 +0000 UTC m=+0.149198746 container start 2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:15:28 np0005589310 podman[180869]: 2026-01-20 19:15:28.989699763 +0000 UTC m=+0.152974318 container attach 2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 14:15:29 np0005589310 intelligent_ganguly[180886]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:15:29 np0005589310 intelligent_ganguly[180886]: --> All data devices are unavailable
Jan 20 14:15:29 np0005589310 systemd[1]: libpod-2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626.scope: Deactivated successfully.
Jan 20 14:15:29 np0005589310 podman[180869]: 2026-01-20 19:15:29.462700858 +0000 UTC m=+0.625975383 container died 2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ganguly, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:15:29 np0005589310 systemd[1]: var-lib-containers-storage-overlay-b372eb13545021e02f6a0a763e09da840805a3989cb00283019133d9987104dd-merged.mount: Deactivated successfully.
Jan 20 14:15:29 np0005589310 podman[180869]: 2026-01-20 19:15:29.509021476 +0000 UTC m=+0.672296001 container remove 2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:15:29 np0005589310 systemd[1]: libpod-conmon-2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626.scope: Deactivated successfully.
Jan 20 14:15:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:29 np0005589310 podman[180981]: 2026-01-20 19:15:29.97707517 +0000 UTC m=+0.036540631 container create 30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_carson, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:15:30 np0005589310 systemd[1]: Started libpod-conmon-30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6.scope.
Jan 20 14:15:30 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:15:30 np0005589310 podman[180981]: 2026-01-20 19:15:30.047805823 +0000 UTC m=+0.107271304 container init 30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:15:30 np0005589310 podman[180981]: 2026-01-20 19:15:30.054314472 +0000 UTC m=+0.113779933 container start 30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_carson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:15:30 np0005589310 podman[180981]: 2026-01-20 19:15:29.960178049 +0000 UTC m=+0.019643530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:15:30 np0005589310 podman[180981]: 2026-01-20 19:15:30.058006222 +0000 UTC m=+0.117471703 container attach 30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_carson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 14:15:30 np0005589310 sweet_carson[181047]: 167 167
Jan 20 14:15:30 np0005589310 systemd[1]: libpod-30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6.scope: Deactivated successfully.
Jan 20 14:15:30 np0005589310 podman[180981]: 2026-01-20 19:15:30.060093153 +0000 UTC m=+0.119558614 container died 30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:15:30 np0005589310 systemd[1]: var-lib-containers-storage-overlay-1e2d226889fbb364c343791803bf58c5a0ec1eb07da0309c244f6c7e497bffe8-merged.mount: Deactivated successfully.
Jan 20 14:15:30 np0005589310 podman[180981]: 2026-01-20 19:15:30.094907601 +0000 UTC m=+0.154373062 container remove 30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:15:30 np0005589310 systemd[1]: libpod-conmon-30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6.scope: Deactivated successfully.
Jan 20 14:15:30 np0005589310 podman[181206]: 2026-01-20 19:15:30.24709152 +0000 UTC m=+0.042188169 container create d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 20 14:15:30 np0005589310 systemd[1]: Started libpod-conmon-d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e.scope.
Jan 20 14:15:30 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:15:30 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c6f06cbbc0cd66f36e24550e6822b6a6cb9cb74bf52e329dcc5045ae644136/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:30 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c6f06cbbc0cd66f36e24550e6822b6a6cb9cb74bf52e329dcc5045ae644136/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:30 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c6f06cbbc0cd66f36e24550e6822b6a6cb9cb74bf52e329dcc5045ae644136/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:30 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c6f06cbbc0cd66f36e24550e6822b6a6cb9cb74bf52e329dcc5045ae644136/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:30 np0005589310 podman[181206]: 2026-01-20 19:15:30.228289111 +0000 UTC m=+0.023385770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:15:30 np0005589310 podman[181206]: 2026-01-20 19:15:30.331767492 +0000 UTC m=+0.126864171 container init d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:15:30 np0005589310 podman[181206]: 2026-01-20 19:15:30.337869842 +0000 UTC m=+0.132966491 container start d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:15:30 np0005589310 podman[181206]: 2026-01-20 19:15:30.372229638 +0000 UTC m=+0.167326317 container attach d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]: {
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:    "0": [
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:        {
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "devices": [
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "/dev/loop3"
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            ],
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "lv_name": "ceph_lv0",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "lv_size": "21470642176",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "name": "ceph_lv0",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "tags": {
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.cluster_name": "ceph",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.crush_device_class": "",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.encrypted": "0",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.objectstore": "bluestore",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.osd_id": "0",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.type": "block",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.vdo": "0",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.with_tpm": "0"
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            },
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "type": "block",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "vg_name": "ceph_vg0"
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:        }
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:    ],
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:    "1": [
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:        {
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "devices": [
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "/dev/loop4"
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            ],
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "lv_name": "ceph_lv1",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "lv_size": "21470642176",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "name": "ceph_lv1",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "tags": {
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.cluster_name": "ceph",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.crush_device_class": "",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.encrypted": "0",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.objectstore": "bluestore",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.osd_id": "1",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.type": "block",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.vdo": "0",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.with_tpm": "0"
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            },
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "type": "block",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "vg_name": "ceph_vg1"
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:        }
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:    ],
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:    "2": [
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:        {
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "devices": [
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "/dev/loop5"
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            ],
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "lv_name": "ceph_lv2",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "lv_size": "21470642176",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "name": "ceph_lv2",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "tags": {
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.cluster_name": "ceph",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.crush_device_class": "",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.encrypted": "0",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.objectstore": "bluestore",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.osd_id": "2",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.type": "block",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.vdo": "0",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:                "ceph.with_tpm": "0"
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            },
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "type": "block",
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:            "vg_name": "ceph_vg2"
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:        }
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]:    ]
Jan 20 14:15:30 np0005589310 compassionate_bose[181288]: }
Jan 20 14:15:30 np0005589310 systemd[1]: libpod-d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e.scope: Deactivated successfully.
Jan 20 14:15:30 np0005589310 podman[181206]: 2026-01-20 19:15:30.64808348 +0000 UTC m=+0.443180129 container died d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:15:30 np0005589310 systemd[1]: var-lib-containers-storage-overlay-19c6f06cbbc0cd66f36e24550e6822b6a6cb9cb74bf52e329dcc5045ae644136-merged.mount: Deactivated successfully.
Jan 20 14:15:30 np0005589310 podman[181206]: 2026-01-20 19:15:30.692067952 +0000 UTC m=+0.487164611 container remove d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:15:30 np0005589310 systemd[1]: libpod-conmon-d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e.scope: Deactivated successfully.
Jan 20 14:15:30 np0005589310 systemd[1]: Stopping OpenSSH server daemon...
Jan 20 14:15:30 np0005589310 systemd[1]: sshd.service: Deactivated successfully.
Jan 20 14:15:30 np0005589310 systemd[1]: Stopped OpenSSH server daemon.
Jan 20 14:15:30 np0005589310 systemd[1]: sshd.service: Consumed 3.350s CPU time, read 564.0K from disk, written 68.0K to disk.
Jan 20 14:15:30 np0005589310 systemd[1]: Stopped target sshd-keygen.target.
Jan 20 14:15:30 np0005589310 systemd[1]: Stopping sshd-keygen.target...
Jan 20 14:15:30 np0005589310 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 14:15:30 np0005589310 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 14:15:30 np0005589310 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 14:15:30 np0005589310 systemd[1]: Reached target sshd-keygen.target.
Jan 20 14:15:30 np0005589310 systemd[1]: Starting OpenSSH server daemon...
Jan 20 14:15:30 np0005589310 systemd[1]: Started OpenSSH server daemon.
Jan 20 14:15:31 np0005589310 podman[181758]: 2026-01-20 19:15:31.132726538 +0000 UTC m=+0.036620493 container create 769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:15:31 np0005589310 systemd[1]: Started libpod-conmon-769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b.scope.
Jan 20 14:15:31 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:15:31 np0005589310 podman[181758]: 2026-01-20 19:15:31.202201041 +0000 UTC m=+0.106095006 container init 769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:15:31 np0005589310 podman[181758]: 2026-01-20 19:15:31.210679037 +0000 UTC m=+0.114573032 container start 769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_newton, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:15:31 np0005589310 podman[181758]: 2026-01-20 19:15:31.115308964 +0000 UTC m=+0.019202949 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:15:31 np0005589310 interesting_newton[181785]: 167 167
Jan 20 14:15:31 np0005589310 podman[181758]: 2026-01-20 19:15:31.214170013 +0000 UTC m=+0.118063998 container attach 769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:15:31 np0005589310 systemd[1]: libpod-769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b.scope: Deactivated successfully.
Jan 20 14:15:31 np0005589310 podman[181758]: 2026-01-20 19:15:31.217470033 +0000 UTC m=+0.121363998 container died 769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 14:15:31 np0005589310 systemd[1]: var-lib-containers-storage-overlay-853f09be80d9311c7bd4e4e6e912ed84fc4fde97b5d4ea3d99b211375922bcbd-merged.mount: Deactivated successfully.
Jan 20 14:15:31 np0005589310 podman[181758]: 2026-01-20 19:15:31.274304337 +0000 UTC m=+0.178198302 container remove 769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_newton, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 20 14:15:31 np0005589310 systemd[1]: libpod-conmon-769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b.scope: Deactivated successfully.
Jan 20 14:15:31 np0005589310 podman[181833]: 2026-01-20 19:15:31.436023928 +0000 UTC m=+0.039413321 container create 856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:15:31 np0005589310 systemd[1]: Started libpod-conmon-856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260.scope.
Jan 20 14:15:31 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:15:31 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0287d3f3ce276279a5d2d03bb2bf1203a7630e1d0faa378bc0fc22513f0ffe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:31 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0287d3f3ce276279a5d2d03bb2bf1203a7630e1d0faa378bc0fc22513f0ffe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:31 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0287d3f3ce276279a5d2d03bb2bf1203a7630e1d0faa378bc0fc22513f0ffe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:31 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0287d3f3ce276279a5d2d03bb2bf1203a7630e1d0faa378bc0fc22513f0ffe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:15:31 np0005589310 podman[181833]: 2026-01-20 19:15:31.419037585 +0000 UTC m=+0.022426998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:15:31 np0005589310 podman[181833]: 2026-01-20 19:15:31.516486918 +0000 UTC m=+0.119876361 container init 856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:15:31 np0005589310 podman[181833]: 2026-01-20 19:15:31.523064599 +0000 UTC m=+0.126453992 container start 856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:15:31 np0005589310 podman[181833]: 2026-01-20 19:15:31.526897923 +0000 UTC m=+0.130287336 container attach 856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 14:15:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:15:31
Jan 20 14:15:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:15:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:15:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'vms', 'images', 'volumes']
Jan 20 14:15:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:15:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:32 np0005589310 lvm[182050]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:15:32 np0005589310 lvm[182048]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:15:32 np0005589310 lvm[182047]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:15:32 np0005589310 lvm[182047]: VG ceph_vg0 finished
Jan 20 14:15:32 np0005589310 lvm[182048]: VG ceph_vg1 finished
Jan 20 14:15:32 np0005589310 lvm[182050]: VG ceph_vg2 finished
Jan 20 14:15:32 np0005589310 frosty_satoshi[181859]: {}
Jan 20 14:15:32 np0005589310 systemd[1]: libpod-856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260.scope: Deactivated successfully.
Jan 20 14:15:32 np0005589310 systemd[1]: libpod-856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260.scope: Consumed 1.287s CPU time.
Jan 20 14:15:32 np0005589310 podman[181833]: 2026-01-20 19:15:32.364231504 +0000 UTC m=+0.967620897 container died 856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 20 14:15:32 np0005589310 systemd[1]: var-lib-containers-storage-overlay-7d0287d3f3ce276279a5d2d03bb2bf1203a7630e1d0faa378bc0fc22513f0ffe-merged.mount: Deactivated successfully.
Jan 20 14:15:32 np0005589310 podman[181833]: 2026-01-20 19:15:32.409428785 +0000 UTC m=+1.012818178 container remove 856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:15:32 np0005589310 systemd[1]: libpod-conmon-856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260.scope: Deactivated successfully.
Jan 20 14:15:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:15:32 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:15:32 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:15:32 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:15:32 np0005589310 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 14:15:32 np0005589310 systemd[1]: Starting man-db-cache-update.service...
Jan 20 14:15:32 np0005589310 systemd[1]: Reloading.
Jan 20 14:15:32 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:15:32 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:15:32 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:15:32 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:15:33 np0005589310 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 14:15:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:34 np0005589310 podman[183726]: 2026-01-20 19:15:34.435674205 +0000 UTC m=+0.102912218 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:15:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:15:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:36 np0005589310 podman[186485]: 2026-01-20 19:15:36.628263639 +0000 UTC m=+0.051917586 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:15:37 np0005589310 python3.9[187113]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:15:37 np0005589310 systemd[1]: Reloading.
Jan 20 14:15:37 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:15:37 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:15:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:38 np0005589310 python3.9[188507]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:15:38 np0005589310 systemd[1]: Reloading.
Jan 20 14:15:38 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:15:38 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:15:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:39 np0005589310 python3.9[189868]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:15:39 np0005589310 systemd[1]: Reloading.
Jan 20 14:15:39 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:15:39 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:15:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:40 np0005589310 python3.9[191203]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:15:40 np0005589310 systemd[1]: Reloading.
Jan 20 14:15:40 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:15:40 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:15:40 np0005589310 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 14:15:40 np0005589310 systemd[1]: Finished man-db-cache-update.service.
Jan 20 14:15:40 np0005589310 systemd[1]: man-db-cache-update.service: Consumed 9.622s CPU time.
Jan 20 14:15:40 np0005589310 systemd[1]: run-r612b4a7e4bb1452bbb080a765c554355.service: Deactivated successfully.
Jan 20 14:15:41 np0005589310 python3.9[191527]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:41 np0005589310 systemd[1]: Reloading.
Jan 20 14:15:41 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:15:41 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:15:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:42 np0005589310 python3.9[191718]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:42 np0005589310 systemd[1]: Reloading.
Jan 20 14:15:42 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:15:42 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:15:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:44 np0005589310 python3.9[191908]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:44 np0005589310 systemd[1]: Reloading.
Jan 20 14:15:44 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:15:44 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:15:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:15:45 np0005589310 python3.9[192099]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:46 np0005589310 python3.9[192254]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:46 np0005589310 systemd[1]: Reloading.
Jan 20 14:15:46 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:15:46 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:15:47 np0005589310 python3.9[192445]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 14:15:47 np0005589310 systemd[1]: Reloading.
Jan 20 14:15:47 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:15:47 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:15:47 np0005589310 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 20 14:15:47 np0005589310 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 20 14:15:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:48 np0005589310 python3.9[192638]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:49 np0005589310 python3.9[192793]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:49 np0005589310 python3.9[192948]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:50 np0005589310 python3.9[193103]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:51 np0005589310 python3.9[193258]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:52 np0005589310 python3.9[193413]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:52 np0005589310 python3.9[193568]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:53 np0005589310 python3.9[193723]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:54 np0005589310 python3.9[193878]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:55 np0005589310 python3.9[194033]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:55 np0005589310 python3.9[194188]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:56 np0005589310 python3.9[194343]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:57 np0005589310 python3.9[194498]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:15:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:15:59 np0005589310 python3.9[194653]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 14:15:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:00 np0005589310 python3.9[194808]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:00 np0005589310 python3.9[194960]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:01 np0005589310 python3.9[195112]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:01 np0005589310 python3.9[195264]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:02 np0005589310 python3.9[195416]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:02 np0005589310 python3.9[195568]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:16:03 np0005589310 python3.9[195718]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:16:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:04 np0005589310 python3.9[195870]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:16:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:16:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:16:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:16:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:16:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:16:05 np0005589310 podman[195967]: 2026-01-20 19:16:05.001476262 +0000 UTC m=+0.109701264 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:16:05 np0005589310 python3.9[196012]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936563.8117375-557-237959807662936/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:16:05.441 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:16:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:16:05.442 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:16:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:16:05.442 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:16:05 np0005589310 python3.9[196171]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:06 np0005589310 python3.9[196296]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936565.2694316-557-139164561759388/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:06 np0005589310 podman[196420]: 2026-01-20 19:16:06.78470076 +0000 UTC m=+0.064533593 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 14:16:06 np0005589310 python3.9[196467]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:07 np0005589310 python3.9[196592]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936566.4129913-557-204330862314184/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:08 np0005589310 python3.9[196744]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:08 np0005589310 python3.9[196869]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936567.647269-557-101964725429641/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:09 np0005589310 python3.9[197021]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:09 np0005589310 python3.9[197146]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936568.7885964-557-91731422437573/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.930285) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936569930315, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2038, "num_deletes": 251, "total_data_size": 3606422, "memory_usage": 3657432, "flush_reason": "Manual Compaction"}
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936569947699, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3530265, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9740, "largest_seqno": 11777, "table_properties": {"data_size": 3520917, "index_size": 5970, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17725, "raw_average_key_size": 19, "raw_value_size": 3502532, "raw_average_value_size": 3840, "num_data_blocks": 271, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936334, "oldest_key_time": 1768936334, "file_creation_time": 1768936569, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17483 microseconds, and 6915 cpu microseconds.
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.947766) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3530265 bytes OK
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.947786) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.949229) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.949244) EVENT_LOG_v1 {"time_micros": 1768936569949240, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.949260) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3597929, prev total WAL file size 3597929, number of live WAL files 2.
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.950533) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3447KB)], [26(6112KB)]
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936569950600, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9789444, "oldest_snapshot_seqno": -1}
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3723 keys, 8171744 bytes, temperature: kUnknown
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936569995759, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8171744, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8143215, "index_size": 18115, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 89368, "raw_average_key_size": 24, "raw_value_size": 8072413, "raw_average_value_size": 2168, "num_data_blocks": 784, "num_entries": 3723, "num_filter_entries": 3723, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 1768936569, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.995996) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8171744 bytes
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.997937) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.5 rd, 180.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 6.0 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4237, records dropped: 514 output_compression: NoCompression
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.997960) EVENT_LOG_v1 {"time_micros": 1768936569997949, "job": 10, "event": "compaction_finished", "compaction_time_micros": 45226, "compaction_time_cpu_micros": 17245, "output_level": 6, "num_output_files": 1, "total_output_size": 8171744, "num_input_records": 4237, "num_output_records": 3723, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936569998652, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936569999609, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.950423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.999787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.999794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.999796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.999797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:16:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.999799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:16:10 np0005589310 python3.9[197298]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:10 np0005589310 python3.9[197423]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936569.9683702-557-25540099043272/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:11 np0005589310 python3.9[197575]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:11 np0005589310 python3.9[197698]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936571.0617537-557-79250257004656/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:12 np0005589310 python3.9[197850]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:13 np0005589310 python3.9[197975]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936572.135374-557-75956552410966/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:13 np0005589310 python3.9[198127]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 20 14:16:14 np0005589310 python3.9[198280]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:15 np0005589310 python3.9[198432]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:15 np0005589310 python3.9[198584]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:16 np0005589310 python3.9[198736]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:16 np0005589310 python3.9[198888]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:17 np0005589310 python3.9[199040]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:18 np0005589310 python3.9[199192]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:18 np0005589310 python3.9[199344]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:19 np0005589310 python3.9[199496]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:20 np0005589310 python3.9[199648]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:20 np0005589310 python3.9[199800]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:21 np0005589310 python3.9[199952]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:21 np0005589310 python3.9[200104]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:22 np0005589310 python3.9[200256]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:23 np0005589310 python3.9[200408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:23 np0005589310 python3.9[200531]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936582.8093393-778-120381698394021/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:24 np0005589310 python3.9[200683]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:24 np0005589310 python3.9[200806]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936583.9970818-778-269894785348466/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:25 np0005589310 python3.9[200958]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:26 np0005589310 python3.9[201081]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936585.107667-778-112668106749677/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:26 np0005589310 python3.9[201233]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:27 np0005589310 python3.9[201356]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936586.315435-778-64109133258443/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:28 np0005589310 python3.9[201508]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:28 np0005589310 python3.9[201631]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936587.6177828-778-91431063390145/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:29 np0005589310 python3.9[201783]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:29 np0005589310 python3.9[201906]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936588.9957635-778-137321479195980/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:30 np0005589310 python3.9[202058]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:31 np0005589310 python3.9[202183]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936590.0700088-778-101568891489931/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:16:31
Jan 20 14:16:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:16:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:16:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', '.mgr', '.rgw.root', 'images', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Jan 20 14:16:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:16:31 np0005589310 python3.9[202335]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:32 np0005589310 python3.9[202458]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936591.2716708-778-27383185228753/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:32 np0005589310 python3.9[202632]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:16:33 np0005589310 python3.9[202837]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936592.3218641-778-241745188474360/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:33 np0005589310 podman[202901]: 2026-01-20 19:16:33.570995564 +0000 UTC m=+0.042348364 container create 420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_meninsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:16:33 np0005589310 systemd[1]: Started libpod-conmon-420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2.scope.
Jan 20 14:16:33 np0005589310 podman[202901]: 2026-01-20 19:16:33.55072134 +0000 UTC m=+0.022074160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:16:33 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:16:33 np0005589310 podman[202901]: 2026-01-20 19:16:33.688120939 +0000 UTC m=+0.159473759 container init 420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:16:33 np0005589310 podman[202901]: 2026-01-20 19:16:33.696091094 +0000 UTC m=+0.167443894 container start 420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 14:16:33 np0005589310 podman[202901]: 2026-01-20 19:16:33.699743983 +0000 UTC m=+0.171096783 container attach 420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:16:33 np0005589310 mystifying_meninsky[202963]: 167 167
Jan 20 14:16:33 np0005589310 systemd[1]: libpod-420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2.scope: Deactivated successfully.
Jan 20 14:16:33 np0005589310 podman[202901]: 2026-01-20 19:16:33.702600583 +0000 UTC m=+0.173953383 container died 420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 14:16:33 np0005589310 systemd[1]: var-lib-containers-storage-overlay-f7f2826f13a3909b2daa5671f409423d36ad726476113c67281ba41c132b8188-merged.mount: Deactivated successfully.
Jan 20 14:16:33 np0005589310 podman[202901]: 2026-01-20 19:16:33.749615788 +0000 UTC m=+0.220968588 container remove 420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_meninsky, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:16:33 np0005589310 systemd[1]: libpod-conmon-420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2.scope: Deactivated successfully.
Jan 20 14:16:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:33 np0005589310 podman[203069]: 2026-01-20 19:16:33.906053792 +0000 UTC m=+0.036804088 container create d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 14:16:33 np0005589310 systemd[1]: Started libpod-conmon-d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0.scope.
Jan 20 14:16:33 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:16:33 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7483a97fa2ac6f9a8b78dd8d6c58fcccfc1a3b949f1b2400831683af9a01ab3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:33 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7483a97fa2ac6f9a8b78dd8d6c58fcccfc1a3b949f1b2400831683af9a01ab3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:33 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7483a97fa2ac6f9a8b78dd8d6c58fcccfc1a3b949f1b2400831683af9a01ab3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:33 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7483a97fa2ac6f9a8b78dd8d6c58fcccfc1a3b949f1b2400831683af9a01ab3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:33 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7483a97fa2ac6f9a8b78dd8d6c58fcccfc1a3b949f1b2400831683af9a01ab3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:33 np0005589310 podman[203069]: 2026-01-20 19:16:33.891177189 +0000 UTC m=+0.021927505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:16:33 np0005589310 podman[203069]: 2026-01-20 19:16:33.987850566 +0000 UTC m=+0.118600892 container init d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_jackson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 14:16:33 np0005589310 podman[203069]: 2026-01-20 19:16:33.995709928 +0000 UTC m=+0.126460224 container start d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_jackson, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:16:34 np0005589310 podman[203069]: 2026-01-20 19:16:34.005587589 +0000 UTC m=+0.136337915 container attach d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:16:34 np0005589310 python3.9[203063]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:34 np0005589310 flamboyant_jackson[203086]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:16:34 np0005589310 flamboyant_jackson[203086]: --> All data devices are unavailable
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:16:34 np0005589310 systemd[1]: libpod-d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0.scope: Deactivated successfully.
Jan 20 14:16:34 np0005589310 podman[203069]: 2026-01-20 19:16:34.493294788 +0000 UTC m=+0.624045084 container died d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_jackson, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:16:34 np0005589310 systemd[1]: var-lib-containers-storage-overlay-d7483a97fa2ac6f9a8b78dd8d6c58fcccfc1a3b949f1b2400831683af9a01ab3-merged.mount: Deactivated successfully.
Jan 20 14:16:34 np0005589310 podman[203069]: 2026-01-20 19:16:34.540170642 +0000 UTC m=+0.670920938 container remove d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_jackson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:16:34 np0005589310 systemd[1]: libpod-conmon-d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0.scope: Deactivated successfully.
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:16:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:16:34 np0005589310 python3.9[203225]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936593.5707355-778-279462881777694/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:35 np0005589310 podman[203431]: 2026-01-20 19:16:34.948702811 +0000 UTC m=+0.019955648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:16:35 np0005589310 python3.9[203471]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:35 np0005589310 auditd[702]: Audit daemon rotating log files
Jan 20 14:16:35 np0005589310 podman[203431]: 2026-01-20 19:16:35.359407733 +0000 UTC m=+0.430660550 container create e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 20 14:16:35 np0005589310 systemd[1]: Started libpod-conmon-e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3.scope.
Jan 20 14:16:35 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:16:35 np0005589310 podman[203508]: 2026-01-20 19:16:35.436330159 +0000 UTC m=+0.110451424 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 14:16:35 np0005589310 podman[203431]: 2026-01-20 19:16:35.458762916 +0000 UTC m=+0.530015763 container init e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:16:35 np0005589310 podman[203431]: 2026-01-20 19:16:35.468151545 +0000 UTC m=+0.539404352 container start e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 14:16:35 np0005589310 podman[203431]: 2026-01-20 19:16:35.471589908 +0000 UTC m=+0.542842755 container attach e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 14:16:35 np0005589310 great_kepler[203563]: 167 167
Jan 20 14:16:35 np0005589310 systemd[1]: libpod-e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3.scope: Deactivated successfully.
Jan 20 14:16:35 np0005589310 conmon[203563]: conmon e155e937769e3514460f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3.scope/container/memory.events
Jan 20 14:16:35 np0005589310 podman[203431]: 2026-01-20 19:16:35.478005395 +0000 UTC m=+0.549258202 container died e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:16:35 np0005589310 systemd[1]: var-lib-containers-storage-overlay-2e05e4dc795773f66da6fe1a9d029bad5de50a174672c47b7f63992547eac89d-merged.mount: Deactivated successfully.
Jan 20 14:16:35 np0005589310 podman[203431]: 2026-01-20 19:16:35.518114392 +0000 UTC m=+0.589367209 container remove e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:16:35 np0005589310 systemd[1]: libpod-conmon-e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3.scope: Deactivated successfully.
Jan 20 14:16:35 np0005589310 podman[203646]: 2026-01-20 19:16:35.674564797 +0000 UTC m=+0.041998755 container create 31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 14:16:35 np0005589310 systemd[1]: Started libpod-conmon-31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0.scope.
Jan 20 14:16:35 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:16:35 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0a801b247881762510e2a91a6f2a2d09139fefe8ec318b28edf15ae157fd6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:35 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0a801b247881762510e2a91a6f2a2d09139fefe8ec318b28edf15ae157fd6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:35 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0a801b247881762510e2a91a6f2a2d09139fefe8ec318b28edf15ae157fd6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:35 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0a801b247881762510e2a91a6f2a2d09139fefe8ec318b28edf15ae157fd6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:35 np0005589310 python3.9[203640]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936594.7216325-778-41189754982359/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:35 np0005589310 podman[203646]: 2026-01-20 19:16:35.655937183 +0000 UTC m=+0.023371161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:16:35 np0005589310 podman[203646]: 2026-01-20 19:16:35.760324677 +0000 UTC m=+0.127758655 container init 31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:16:35 np0005589310 podman[203646]: 2026-01-20 19:16:35.76741808 +0000 UTC m=+0.134852038 container start 31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dubinsky, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:16:35 np0005589310 podman[203646]: 2026-01-20 19:16:35.770442163 +0000 UTC m=+0.137876141 container attach 31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dubinsky, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:16:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]: {
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:    "0": [
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:        {
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "devices": [
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "/dev/loop3"
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            ],
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "lv_name": "ceph_lv0",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "lv_size": "21470642176",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "name": "ceph_lv0",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "tags": {
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.cluster_name": "ceph",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.crush_device_class": "",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.encrypted": "0",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.objectstore": "bluestore",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.osd_id": "0",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.type": "block",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.vdo": "0",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.with_tpm": "0"
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            },
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "type": "block",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "vg_name": "ceph_vg0"
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:        }
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:    ],
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:    "1": [
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:        {
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "devices": [
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "/dev/loop4"
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            ],
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "lv_name": "ceph_lv1",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "lv_size": "21470642176",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "name": "ceph_lv1",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "tags": {
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.cluster_name": "ceph",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.crush_device_class": "",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.encrypted": "0",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.objectstore": "bluestore",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.osd_id": "1",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.type": "block",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.vdo": "0",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.with_tpm": "0"
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            },
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "type": "block",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "vg_name": "ceph_vg1"
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:        }
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:    ],
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:    "2": [
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:        {
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "devices": [
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "/dev/loop5"
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            ],
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "lv_name": "ceph_lv2",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "lv_size": "21470642176",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "name": "ceph_lv2",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "tags": {
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.cluster_name": "ceph",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.crush_device_class": "",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.encrypted": "0",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.objectstore": "bluestore",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.osd_id": "2",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.type": "block",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.vdo": "0",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:                "ceph.with_tpm": "0"
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            },
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "type": "block",
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:            "vg_name": "ceph_vg2"
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:        }
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]:    ]
Jan 20 14:16:36 np0005589310 kind_dubinsky[203662]: }
Jan 20 14:16:36 np0005589310 systemd[1]: libpod-31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0.scope: Deactivated successfully.
Jan 20 14:16:36 np0005589310 podman[203646]: 2026-01-20 19:16:36.094891093 +0000 UTC m=+0.462325051 container died 31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dubinsky, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:16:36 np0005589310 systemd[1]: var-lib-containers-storage-overlay-8d0a801b247881762510e2a91a6f2a2d09139fefe8ec318b28edf15ae157fd6a-merged.mount: Deactivated successfully.
Jan 20 14:16:36 np0005589310 podman[203646]: 2026-01-20 19:16:36.136227372 +0000 UTC m=+0.503661330 container remove 31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:16:36 np0005589310 systemd[1]: libpod-conmon-31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0.scope: Deactivated successfully.
Jan 20 14:16:36 np0005589310 python3.9[203834]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:36 np0005589310 podman[203943]: 2026-01-20 19:16:36.543693605 +0000 UTC m=+0.039920595 container create 112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:16:36 np0005589310 systemd[1]: Started libpod-conmon-112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e.scope.
Jan 20 14:16:36 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:16:36 np0005589310 podman[203943]: 2026-01-20 19:16:36.525733387 +0000 UTC m=+0.021960387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:16:36 np0005589310 podman[203943]: 2026-01-20 19:16:36.644596265 +0000 UTC m=+0.140823255 container init 112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:16:36 np0005589310 podman[203943]: 2026-01-20 19:16:36.652328473 +0000 UTC m=+0.148555463 container start 112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:16:36 np0005589310 determined_sutherland[203982]: 167 167
Jan 20 14:16:36 np0005589310 podman[203943]: 2026-01-20 19:16:36.657046309 +0000 UTC m=+0.153273299 container attach 112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sutherland, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:16:36 np0005589310 systemd[1]: libpod-112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e.scope: Deactivated successfully.
Jan 20 14:16:36 np0005589310 podman[203943]: 2026-01-20 19:16:36.657629432 +0000 UTC m=+0.153856422 container died 112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:16:36 np0005589310 systemd[1]: var-lib-containers-storage-overlay-545714ba0b27f4d0b001166e9a12a83f466210970902577e607b109b74fb749e-merged.mount: Deactivated successfully.
Jan 20 14:16:36 np0005589310 podman[203943]: 2026-01-20 19:16:36.711151777 +0000 UTC m=+0.207378767 container remove 112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sutherland, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:16:36 np0005589310 systemd[1]: libpod-conmon-112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e.scope: Deactivated successfully.
Jan 20 14:16:36 np0005589310 podman[204058]: 2026-01-20 19:16:36.880339162 +0000 UTC m=+0.055558665 container create 5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:16:36 np0005589310 python3.9[204050]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936595.9211617-778-109271389805803/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:36 np0005589310 systemd[1]: Started libpod-conmon-5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e.scope.
Jan 20 14:16:36 np0005589310 podman[204058]: 2026-01-20 19:16:36.849545021 +0000 UTC m=+0.024764554 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:16:36 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:16:36 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99863ad9b87909a6a2f29adc33e25a71e6f1c18e1d6c7c3ddf4734b03249903/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:36 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99863ad9b87909a6a2f29adc33e25a71e6f1c18e1d6c7c3ddf4734b03249903/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:36 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99863ad9b87909a6a2f29adc33e25a71e6f1c18e1d6c7c3ddf4734b03249903/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:36 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99863ad9b87909a6a2f29adc33e25a71e6f1c18e1d6c7c3ddf4734b03249903/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:16:36 np0005589310 podman[204058]: 2026-01-20 19:16:36.973010391 +0000 UTC m=+0.148229914 container init 5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lovelace, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 14:16:36 np0005589310 podman[204058]: 2026-01-20 19:16:36.980542075 +0000 UTC m=+0.155761578 container start 5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 14:16:36 np0005589310 podman[204058]: 2026-01-20 19:16:36.983979198 +0000 UTC m=+0.159198731 container attach 5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lovelace, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:16:36 np0005589310 podman[204072]: 2026-01-20 19:16:36.996915863 +0000 UTC m=+0.082411839 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 14:16:37 np0005589310 python3.9[204261]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:37 np0005589310 lvm[204390]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:16:37 np0005589310 lvm[204391]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:16:37 np0005589310 lvm[204390]: VG ceph_vg0 finished
Jan 20 14:16:37 np0005589310 lvm[204391]: VG ceph_vg1 finished
Jan 20 14:16:37 np0005589310 lvm[204396]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:16:37 np0005589310 lvm[204396]: VG ceph_vg2 finished
Jan 20 14:16:37 np0005589310 tender_lovelace[204086]: {}
Jan 20 14:16:37 np0005589310 systemd[1]: libpod-5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e.scope: Deactivated successfully.
Jan 20 14:16:37 np0005589310 systemd[1]: libpod-5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e.scope: Consumed 1.303s CPU time.
Jan 20 14:16:37 np0005589310 podman[204058]: 2026-01-20 19:16:37.821572868 +0000 UTC m=+0.996792391 container died 5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lovelace, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:16:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:37 np0005589310 python3.9[204451]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936597.0309184-778-143343513104843/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:38 np0005589310 systemd[1]: var-lib-containers-storage-overlay-e99863ad9b87909a6a2f29adc33e25a71e6f1c18e1d6c7c3ddf4734b03249903-merged.mount: Deactivated successfully.
Jan 20 14:16:38 np0005589310 podman[204058]: 2026-01-20 19:16:38.051119434 +0000 UTC m=+1.226338937 container remove 5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:16:38 np0005589310 systemd[1]: libpod-conmon-5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e.scope: Deactivated successfully.
Jan 20 14:16:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:16:38 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:16:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:16:38 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:16:38 np0005589310 python3.9[204640]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:39 np0005589310 python3.9[204763]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936598.1152465-778-276303699404132/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:39 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:16:39 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:16:39 np0005589310 python3.9[204913]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:40 np0005589310 python3.9[205068]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 20 14:16:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:41 np0005589310 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 20 14:16:42 np0005589310 python3.9[205224]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:42 np0005589310 python3.9[205376]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:43 np0005589310 python3.9[205528]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:44 np0005589310 python3.9[205680]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:16:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:16:44 np0005589310 python3.9[205832]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:45 np0005589310 python3.9[205984]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:45 np0005589310 python3.9[206136]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:46 np0005589310 python3.9[206288]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:47 np0005589310 python3.9[206440]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:47 np0005589310 python3.9[206592]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:48 np0005589310 python3.9[206744]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:16:48 np0005589310 systemd[1]: Reloading.
Jan 20 14:16:48 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:16:48 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:16:49 np0005589310 systemd[1]: Starting libvirt logging daemon socket...
Jan 20 14:16:49 np0005589310 systemd[1]: Listening on libvirt logging daemon socket.
Jan 20 14:16:49 np0005589310 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 20 14:16:49 np0005589310 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 20 14:16:49 np0005589310 systemd[1]: Starting libvirt logging daemon...
Jan 20 14:16:49 np0005589310 systemd[1]: Started libvirt logging daemon.
Jan 20 14:16:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:50 np0005589310 python3.9[206937]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:16:50 np0005589310 systemd[1]: Reloading.
Jan 20 14:16:50 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:16:50 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:16:50 np0005589310 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 20 14:16:50 np0005589310 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 20 14:16:50 np0005589310 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 20 14:16:50 np0005589310 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 20 14:16:50 np0005589310 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 20 14:16:50 np0005589310 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 20 14:16:50 np0005589310 systemd[1]: Starting libvirt nodedev daemon...
Jan 20 14:16:50 np0005589310 systemd[1]: Started libvirt nodedev daemon.
Jan 20 14:16:51 np0005589310 python3.9[207152]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:16:51 np0005589310 systemd[1]: Reloading.
Jan 20 14:16:51 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:16:51 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:16:51 np0005589310 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 20 14:16:51 np0005589310 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 20 14:16:51 np0005589310 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 20 14:16:51 np0005589310 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 20 14:16:51 np0005589310 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 20 14:16:51 np0005589310 systemd[1]: Starting libvirt proxy daemon...
Jan 20 14:16:51 np0005589310 systemd[1]: Started libvirt proxy daemon.
Jan 20 14:16:51 np0005589310 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 20 14:16:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:51 np0005589310 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 20 14:16:51 np0005589310 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 20 14:16:52 np0005589310 python3.9[207371]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:16:52 np0005589310 systemd[1]: Reloading.
Jan 20 14:16:52 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:16:52 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:16:52 np0005589310 systemd[1]: Listening on libvirt locking daemon socket.
Jan 20 14:16:52 np0005589310 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 20 14:16:52 np0005589310 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 14:16:52 np0005589310 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 20 14:16:52 np0005589310 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 20 14:16:52 np0005589310 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 20 14:16:52 np0005589310 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 20 14:16:52 np0005589310 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 20 14:16:52 np0005589310 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 20 14:16:52 np0005589310 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 20 14:16:52 np0005589310 systemd[1]: Starting libvirt QEMU daemon...
Jan 20 14:16:52 np0005589310 systemd[1]: Started libvirt QEMU daemon.
Jan 20 14:16:52 np0005589310 setroubleshoot[207189]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l d4fe53a8-32aa-419d-b759-530cad4fb2a7
Jan 20 14:16:52 np0005589310 setroubleshoot[207189]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 20 14:16:52 np0005589310 setroubleshoot[207189]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l d4fe53a8-32aa-419d-b759-530cad4fb2a7
Jan 20 14:16:52 np0005589310 setroubleshoot[207189]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 20 14:16:53 np0005589310 python3.9[207589]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:16:53 np0005589310 systemd[1]: Reloading.
Jan 20 14:16:53 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:16:53 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:16:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:53 np0005589310 systemd[1]: Starting libvirt secret daemon socket...
Jan 20 14:16:53 np0005589310 systemd[1]: Listening on libvirt secret daemon socket.
Jan 20 14:16:53 np0005589310 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 20 14:16:53 np0005589310 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 20 14:16:53 np0005589310 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 20 14:16:53 np0005589310 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 20 14:16:53 np0005589310 systemd[1]: Starting libvirt secret daemon...
Jan 20 14:16:53 np0005589310 systemd[1]: Started libvirt secret daemon.
Jan 20 14:16:54 np0005589310 python3.9[207801]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:55 np0005589310 python3.9[207953]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:16:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:55 np0005589310 python3.9[208105]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:56 np0005589310 python3.9[208259]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:16:57 np0005589310 python3.9[208409]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:16:57 np0005589310 python3.9[208530]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936616.9172997-1136-227311713050597/.source.xml follow=False _original_basename=secret.xml.j2 checksum=df9391033abbde40fc5cbff8cb85e1f03e415e51 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:16:58 np0005589310 python3.9[208682]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 90fff835-31df-513f-a409-b6642f04e6ac#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:16:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:16:59 np0005589310 python3.9[208844]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:16:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:01 np0005589310 python3.9[209307]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:01 np0005589310 python3.9[209459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:02 np0005589310 python3.9[209582]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936621.31131-1191-279479736714625/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:02 np0005589310 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 20 14:17:02 np0005589310 python3.9[209734]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:02 np0005589310 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 20 14:17:03 np0005589310 python3.9[209886]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:03 np0005589310 python3.9[209964]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:17:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:17:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:17:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:17:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:17:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:17:04 np0005589310 python3.9[210116]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:05 np0005589310 python3.9[210194]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ouxlbozi recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:17:05.442 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:17:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:17:05.443 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:17:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:17:05.444 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:17:05 np0005589310 podman[210318]: 2026-01-20 19:17:05.626204783 +0000 UTC m=+0.118746656 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:17:05 np0005589310 python3.9[210369]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:06 np0005589310 python3.9[210450]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:06 np0005589310 python3.9[210602]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:17:07 np0005589310 podman[210680]: 2026-01-20 19:17:07.375978961 +0000 UTC m=+0.051819825 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 20 14:17:07 np0005589310 python3[210776]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 14:17:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:08 np0005589310 python3.9[210928]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:08 np0005589310 python3.9[211006]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:09 np0005589310 python3.9[211158]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:09 np0005589310 python3.9[211283]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936628.9728224-1280-255564206229718/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:10 np0005589310 python3.9[211435]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:11 np0005589310 python3.9[211513]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:11 np0005589310 python3.9[211665]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:12 np0005589310 python3.9[211743]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:12 np0005589310 python3.9[211895]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:13 np0005589310 python3.9[212020]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936632.2872982-1319-135757056697883/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:14 np0005589310 python3.9[212172]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:14 np0005589310 python3.9[212324]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:17:15 np0005589310 python3.9[212479]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:16 np0005589310 python3.9[212631]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:17:17 np0005589310 python3.9[212784]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:17 np0005589310 python3.9[212938]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:17:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:18 np0005589310 python3.9[213093]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:19 np0005589310 python3.9[213245]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:19 np0005589310 python3.9[213368]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936638.7992146-1391-153261910856164/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:20 np0005589310 python3.9[213520]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:21 np0005589310 python3.9[213643]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936640.1311092-1406-218425547945734/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:22 np0005589310 python3.9[213795]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:22 np0005589310 python3.9[213918]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936641.4991426-1421-71834319203613/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:23 np0005589310 python3.9[214070]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:17:23 np0005589310 systemd[1]: Reloading.
Jan 20 14:17:23 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:17:23 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:17:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:23 np0005589310 systemd[1]: Reached target edpm_libvirt.target.
Jan 20 14:17:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:24 np0005589310 python3.9[214261]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 20 14:17:24 np0005589310 systemd[1]: Reloading.
Jan 20 14:17:24 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:17:24 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:17:25 np0005589310 systemd[1]: Reloading.
Jan 20 14:17:25 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:17:25 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:17:25 np0005589310 systemd[1]: session-49.scope: Deactivated successfully.
Jan 20 14:17:25 np0005589310 systemd[1]: session-49.scope: Consumed 3min 26.910s CPU time.
Jan 20 14:17:25 np0005589310 systemd-logind[797]: Session 49 logged out. Waiting for processes to exit.
Jan 20 14:17:25 np0005589310 systemd-logind[797]: Removed session 49.
Jan 20 14:17:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:31 np0005589310 systemd-logind[797]: New session 50 of user zuul.
Jan 20 14:17:31 np0005589310 systemd[1]: Started Session 50 of User zuul.
Jan 20 14:17:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:17:31
Jan 20 14:17:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:17:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:17:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'images', '.mgr', 'volumes', '.rgw.root']
Jan 20 14:17:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:17:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:32 np0005589310 python3.9[214511]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:17:33 np0005589310 python3.9[214665]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:17:33 np0005589310 network[214682]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:17:33 np0005589310 network[214683]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:17:33 np0005589310 network[214684]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:17:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:17:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:17:35 np0005589310 podman[214769]: 2026-01-20 19:17:35.796293621 +0000 UTC m=+0.110571310 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:17:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:37 np0005589310 python3.9[214983]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 14:17:37 np0005589310 podman[214992]: 2026-01-20 19:17:37.544843766 +0000 UTC m=+0.052269281 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 14:17:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:38 np0005589310 python3.9[215086]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:17:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:17:38 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:17:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:17:38 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:17:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:17:38 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:17:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:17:38 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:17:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:17:38 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:17:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:17:38 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:17:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:39 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:17:39 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:17:39 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:17:39 np0005589310 podman[215228]: 2026-01-20 19:17:39.181694554 +0000 UTC m=+0.034423673 container create 50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kapitsa, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 14:17:39 np0005589310 systemd[1]: Started libpod-conmon-50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5.scope.
Jan 20 14:17:39 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:17:39 np0005589310 podman[215228]: 2026-01-20 19:17:39.255670967 +0000 UTC m=+0.108400086 container init 50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 20 14:17:39 np0005589310 podman[215228]: 2026-01-20 19:17:39.166575955 +0000 UTC m=+0.019305094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:17:39 np0005589310 podman[215228]: 2026-01-20 19:17:39.265522233 +0000 UTC m=+0.118251362 container start 50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kapitsa, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:17:39 np0005589310 podman[215228]: 2026-01-20 19:17:39.269248916 +0000 UTC m=+0.121978055 container attach 50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kapitsa, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:17:39 np0005589310 flamboyant_kapitsa[215244]: 167 167
Jan 20 14:17:39 np0005589310 systemd[1]: libpod-50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5.scope: Deactivated successfully.
Jan 20 14:17:39 np0005589310 podman[215228]: 2026-01-20 19:17:39.275440322 +0000 UTC m=+0.128169471 container died 50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:17:39 np0005589310 systemd[1]: var-lib-containers-storage-overlay-d1cbf375ac62b95710ad675672a4ca04303e9a4cd4da421b92adb2070db30e8d-merged.mount: Deactivated successfully.
Jan 20 14:17:39 np0005589310 podman[215228]: 2026-01-20 19:17:39.585790262 +0000 UTC m=+0.438519391 container remove 50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kapitsa, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:17:39 np0005589310 systemd[1]: libpod-conmon-50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5.scope: Deactivated successfully.
Jan 20 14:17:39 np0005589310 podman[215267]: 2026-01-20 19:17:39.754664712 +0000 UTC m=+0.039927282 container create e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:17:39 np0005589310 systemd[1]: Started libpod-conmon-e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268.scope.
Jan 20 14:17:39 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:17:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436050a4bdf9ad18c4075ba8ed005e474704d76dc464403900b1d87b73dc8828/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436050a4bdf9ad18c4075ba8ed005e474704d76dc464403900b1d87b73dc8828/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436050a4bdf9ad18c4075ba8ed005e474704d76dc464403900b1d87b73dc8828/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436050a4bdf9ad18c4075ba8ed005e474704d76dc464403900b1d87b73dc8828/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:39 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436050a4bdf9ad18c4075ba8ed005e474704d76dc464403900b1d87b73dc8828/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:39 np0005589310 podman[215267]: 2026-01-20 19:17:39.739885951 +0000 UTC m=+0.025148551 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:17:39 np0005589310 podman[215267]: 2026-01-20 19:17:39.843033374 +0000 UTC m=+0.128295954 container init e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:17:39 np0005589310 podman[215267]: 2026-01-20 19:17:39.854712286 +0000 UTC m=+0.139974846 container start e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:17:39 np0005589310 podman[215267]: 2026-01-20 19:17:39.874389139 +0000 UTC m=+0.159651729 container attach e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:17:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:40 np0005589310 competent_mendel[215284]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:17:40 np0005589310 competent_mendel[215284]: --> All data devices are unavailable
Jan 20 14:17:40 np0005589310 systemd[1]: libpod-e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268.scope: Deactivated successfully.
Jan 20 14:17:40 np0005589310 podman[215267]: 2026-01-20 19:17:40.327977837 +0000 UTC m=+0.613240427 container died e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendel, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:17:40 np0005589310 systemd[1]: var-lib-containers-storage-overlay-436050a4bdf9ad18c4075ba8ed005e474704d76dc464403900b1d87b73dc8828-merged.mount: Deactivated successfully.
Jan 20 14:17:40 np0005589310 podman[215267]: 2026-01-20 19:17:40.374202845 +0000 UTC m=+0.659465425 container remove e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 14:17:40 np0005589310 systemd[1]: libpod-conmon-e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268.scope: Deactivated successfully.
Jan 20 14:17:40 np0005589310 podman[215377]: 2026-01-20 19:17:40.849921677 +0000 UTC m=+0.038140875 container create 59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:17:40 np0005589310 podman[215377]: 2026-01-20 19:17:40.831597629 +0000 UTC m=+0.019816857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:17:40 np0005589310 systemd[1]: Started libpod-conmon-59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7.scope.
Jan 20 14:17:40 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:17:41 np0005589310 podman[215377]: 2026-01-20 19:17:41.000280883 +0000 UTC m=+0.188500091 container init 59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:17:41 np0005589310 podman[215377]: 2026-01-20 19:17:41.006822416 +0000 UTC m=+0.195041614 container start 59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:17:41 np0005589310 intelligent_mestorf[215393]: 167 167
Jan 20 14:17:41 np0005589310 systemd[1]: libpod-59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7.scope: Deactivated successfully.
Jan 20 14:17:41 np0005589310 podman[215377]: 2026-01-20 19:17:41.011220166 +0000 UTC m=+0.199439384 container attach 59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mestorf, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:17:41 np0005589310 podman[215377]: 2026-01-20 19:17:41.013833792 +0000 UTC m=+0.202052990 container died 59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mestorf, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 14:17:41 np0005589310 systemd[1]: var-lib-containers-storage-overlay-9585ac62579d0c488ac7f1d4cb79841924b053dae1f7ea692a23cd1a7b58388b-merged.mount: Deactivated successfully.
Jan 20 14:17:41 np0005589310 podman[215377]: 2026-01-20 19:17:41.062234324 +0000 UTC m=+0.250453522 container remove 59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:17:41 np0005589310 systemd[1]: libpod-conmon-59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7.scope: Deactivated successfully.
Jan 20 14:17:41 np0005589310 podman[215418]: 2026-01-20 19:17:41.212802884 +0000 UTC m=+0.041988732 container create 3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:17:41 np0005589310 systemd[1]: Started libpod-conmon-3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8.scope.
Jan 20 14:17:41 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:17:41 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62706d5b25fced56e7916ac9329e1c04f319df2e922f74ccfb937fb20bd9b07f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:41 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62706d5b25fced56e7916ac9329e1c04f319df2e922f74ccfb937fb20bd9b07f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:41 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62706d5b25fced56e7916ac9329e1c04f319df2e922f74ccfb937fb20bd9b07f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:41 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62706d5b25fced56e7916ac9329e1c04f319df2e922f74ccfb937fb20bd9b07f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:41 np0005589310 podman[215418]: 2026-01-20 19:17:41.194888826 +0000 UTC m=+0.024074704 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:17:41 np0005589310 podman[215418]: 2026-01-20 19:17:41.309172188 +0000 UTC m=+0.138358036 container init 3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 20 14:17:41 np0005589310 podman[215418]: 2026-01-20 19:17:41.316537042 +0000 UTC m=+0.145723020 container start 3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 20 14:17:41 np0005589310 podman[215418]: 2026-01-20 19:17:41.320403649 +0000 UTC m=+0.149589517 container attach 3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]: {
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:    "0": [
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:        {
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "devices": [
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "/dev/loop3"
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            ],
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "lv_name": "ceph_lv0",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "lv_size": "21470642176",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "name": "ceph_lv0",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "tags": {
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.cluster_name": "ceph",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.crush_device_class": "",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.encrypted": "0",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.objectstore": "bluestore",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.osd_id": "0",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.type": "block",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.vdo": "0",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.with_tpm": "0"
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            },
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "type": "block",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "vg_name": "ceph_vg0"
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:        }
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:    ],
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:    "1": [
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:        {
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "devices": [
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "/dev/loop4"
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            ],
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "lv_name": "ceph_lv1",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "lv_size": "21470642176",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "name": "ceph_lv1",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "tags": {
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.cluster_name": "ceph",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.crush_device_class": "",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.encrypted": "0",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.objectstore": "bluestore",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.osd_id": "1",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.type": "block",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.vdo": "0",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.with_tpm": "0"
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            },
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "type": "block",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "vg_name": "ceph_vg1"
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:        }
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:    ],
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:    "2": [
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:        {
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "devices": [
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "/dev/loop5"
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            ],
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "lv_name": "ceph_lv2",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "lv_size": "21470642176",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "name": "ceph_lv2",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "tags": {
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.cluster_name": "ceph",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.crush_device_class": "",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.encrypted": "0",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.objectstore": "bluestore",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.osd_id": "2",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.type": "block",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.vdo": "0",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:                "ceph.with_tpm": "0"
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            },
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "type": "block",
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:            "vg_name": "ceph_vg2"
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:        }
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]:    ]
Jan 20 14:17:41 np0005589310 reverent_dirac[215435]: }
Jan 20 14:17:41 np0005589310 systemd[1]: libpod-3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8.scope: Deactivated successfully.
Jan 20 14:17:41 np0005589310 podman[215418]: 2026-01-20 19:17:41.606974805 +0000 UTC m=+0.436160673 container died 3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dirac, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 20 14:17:41 np0005589310 systemd[1]: var-lib-containers-storage-overlay-62706d5b25fced56e7916ac9329e1c04f319df2e922f74ccfb937fb20bd9b07f-merged.mount: Deactivated successfully.
Jan 20 14:17:41 np0005589310 podman[215418]: 2026-01-20 19:17:41.649014258 +0000 UTC m=+0.478200106 container remove 3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:17:41 np0005589310 systemd[1]: libpod-conmon-3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8.scope: Deactivated successfully.
Jan 20 14:17:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:42 np0005589310 podman[215519]: 2026-01-20 19:17:42.093550199 +0000 UTC m=+0.040306340 container create 70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_banzai, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 20 14:17:42 np0005589310 systemd[1]: Started libpod-conmon-70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1.scope.
Jan 20 14:17:42 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:17:42 np0005589310 podman[215519]: 2026-01-20 19:17:42.077705452 +0000 UTC m=+0.024461613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:17:42 np0005589310 podman[215519]: 2026-01-20 19:17:42.17584271 +0000 UTC m=+0.122598931 container init 70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 14:17:42 np0005589310 podman[215519]: 2026-01-20 19:17:42.182108707 +0000 UTC m=+0.128864848 container start 70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:17:42 np0005589310 podman[215519]: 2026-01-20 19:17:42.186066945 +0000 UTC m=+0.132823106 container attach 70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:17:42 np0005589310 angry_banzai[215535]: 167 167
Jan 20 14:17:42 np0005589310 systemd[1]: libpod-70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1.scope: Deactivated successfully.
Jan 20 14:17:42 np0005589310 podman[215519]: 2026-01-20 19:17:42.188308332 +0000 UTC m=+0.135064473 container died 70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:17:42 np0005589310 systemd[1]: var-lib-containers-storage-overlay-779b6b6ce031988af72dd0ff37c3d9dd57d569af15e5f7435c85a1fcf683120e-merged.mount: Deactivated successfully.
Jan 20 14:17:42 np0005589310 podman[215519]: 2026-01-20 19:17:42.224244752 +0000 UTC m=+0.171000893 container remove 70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_banzai, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:17:42 np0005589310 systemd[1]: libpod-conmon-70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1.scope: Deactivated successfully.
Jan 20 14:17:42 np0005589310 podman[215558]: 2026-01-20 19:17:42.455538524 +0000 UTC m=+0.103662007 container create ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elgamal, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 14:17:42 np0005589310 podman[215558]: 2026-01-20 19:17:42.373741315 +0000 UTC m=+0.021864808 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:17:42 np0005589310 systemd[1]: Started libpod-conmon-ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d.scope.
Jan 20 14:17:42 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:17:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4504fda14ffdbd1d3ffc32715d62bb8ca309757e7e61546ea7c07f34e5c49f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4504fda14ffdbd1d3ffc32715d62bb8ca309757e7e61546ea7c07f34e5c49f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4504fda14ffdbd1d3ffc32715d62bb8ca309757e7e61546ea7c07f34e5c49f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:42 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4504fda14ffdbd1d3ffc32715d62bb8ca309757e7e61546ea7c07f34e5c49f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:17:42 np0005589310 podman[215558]: 2026-01-20 19:17:42.527231239 +0000 UTC m=+0.175354742 container init ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:17:42 np0005589310 podman[215558]: 2026-01-20 19:17:42.535277751 +0000 UTC m=+0.183401234 container start ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elgamal, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:17:42 np0005589310 podman[215558]: 2026-01-20 19:17:42.539196369 +0000 UTC m=+0.187319852 container attach ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elgamal, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 20 14:17:43 np0005589310 lvm[215654]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:17:43 np0005589310 lvm[215654]: VG ceph_vg1 finished
Jan 20 14:17:43 np0005589310 lvm[215653]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:17:43 np0005589310 lvm[215653]: VG ceph_vg0 finished
Jan 20 14:17:43 np0005589310 lvm[215656]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:17:43 np0005589310 lvm[215656]: VG ceph_vg2 finished
Jan 20 14:17:43 np0005589310 angry_elgamal[215575]: {}
Jan 20 14:17:43 np0005589310 systemd[1]: libpod-ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d.scope: Deactivated successfully.
Jan 20 14:17:43 np0005589310 systemd[1]: libpod-ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d.scope: Consumed 1.317s CPU time.
Jan 20 14:17:43 np0005589310 podman[215558]: 2026-01-20 19:17:43.331712484 +0000 UTC m=+0.979835967 container died ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elgamal, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 14:17:43 np0005589310 systemd[1]: var-lib-containers-storage-overlay-eb4504fda14ffdbd1d3ffc32715d62bb8ca309757e7e61546ea7c07f34e5c49f-merged.mount: Deactivated successfully.
Jan 20 14:17:43 np0005589310 podman[215558]: 2026-01-20 19:17:43.397719517 +0000 UTC m=+1.045843040 container remove ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elgamal, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:17:43 np0005589310 systemd[1]: libpod-conmon-ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d.scope: Deactivated successfully.
Jan 20 14:17:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:17:43 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:17:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:17:43 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:17:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:44 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:17:44 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:17:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:17:44 np0005589310 python3.9[215849]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:45 np0005589310 python3.9[216001]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:17:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:46 np0005589310 python3.9[216154]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:17:46 np0005589310 python3.9[216306]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:17:47 np0005589310 python3.9[216459]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:17:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:48 np0005589310 python3.9[216582]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936666.852145-90-16826186677517/.source.iscsi _original_basename=.u97w0j4o follow=False checksum=9c63e5636d3dd22e5337afde50d813d21294a1dd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:48 np0005589310 python3.9[216734]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:49 np0005589310 python3.9[216886]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:17:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:50 np0005589310 python3.9[217038]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:17:50 np0005589310 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 20 14:17:51 np0005589310 python3.9[217194]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:17:51 np0005589310 systemd[1]: Reloading.
Jan 20 14:17:51 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:17:51 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:17:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:52 np0005589310 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 20 14:17:52 np0005589310 systemd[1]: Starting Open-iSCSI...
Jan 20 14:17:52 np0005589310 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 14:17:52 np0005589310 systemd[1]: Started Open-iSCSI.
Jan 20 14:17:52 np0005589310 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 20 14:17:52 np0005589310 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 20 14:17:53 np0005589310 python3.9[217392]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:17:53 np0005589310 network[217409]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:17:53 np0005589310 network[217410]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:17:53 np0005589310 network[217411]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:17:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:57 np0005589310 python3.9[217683]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:17:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:17:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:17:59 np0005589310 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 14:17:59 np0005589310 systemd[1]: Starting man-db-cache-update.service...
Jan 20 14:17:59 np0005589310 systemd[1]: Reloading.
Jan 20 14:17:59 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:17:59 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:17:59 np0005589310 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 14:17:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:00 np0005589310 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 14:18:00 np0005589310 systemd[1]: Finished man-db-cache-update.service.
Jan 20 14:18:00 np0005589310 systemd[1]: run-re6527665684b4febacf460e10aa073fe.service: Deactivated successfully.
Jan 20 14:18:00 np0005589310 python3.9[217998]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 20 14:18:01 np0005589310 python3.9[218150]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 20 14:18:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:02 np0005589310 python3.9[218306]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:18:02 np0005589310 python3.9[218429]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936681.9612522-178-250805566780138/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:03 np0005589310 python3.9[218581]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:18:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:18:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:18:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:18:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:18:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:18:04 np0005589310 python3.9[218733]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:18:04 np0005589310 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 14:18:04 np0005589310 systemd[1]: Stopped Load Kernel Modules.
Jan 20 14:18:04 np0005589310 systemd[1]: Stopping Load Kernel Modules...
Jan 20 14:18:04 np0005589310 systemd[1]: Starting Load Kernel Modules...
Jan 20 14:18:04 np0005589310 systemd[1]: Finished Load Kernel Modules.
Jan 20 14:18:05 np0005589310 python3.9[218889]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:18:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:18:05.443 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:18:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:18:05.444 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:18:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:18:05.444 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:18:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:05 np0005589310 podman[219014]: 2026-01-20 19:18:05.93343022 +0000 UTC m=+0.086465436 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:18:06 np0005589310 python3.9[219059]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:18:06 np0005589310 python3.9[219218]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:18:07 np0005589310 python3.9[219341]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936686.3102148-229-195458465676049/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:07 np0005589310 podman[219465]: 2026-01-20 19:18:07.652116208 +0000 UTC m=+0.051791548 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 20 14:18:07 np0005589310 python3.9[219512]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:18:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:08 np0005589310 python3.9[219665]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:09 np0005589310 python3.9[219817]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:09 np0005589310 python3.9[219969]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:10 np0005589310 python3.9[220121]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:10 np0005589310 python3.9[220273]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:11 np0005589310 python3.9[220425]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:12 np0005589310 python3.9[220577]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:12 np0005589310 python3.9[220729]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:18:13 np0005589310 python3.9[220883]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:18:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:14 np0005589310 python3.9[221036]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:18:14 np0005589310 systemd[1]: Listening on multipathd control socket.
Jan 20 14:18:14 np0005589310 python3.9[221192]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:18:14 np0005589310 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 20 14:18:14 np0005589310 udevadm[221197]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 20 14:18:14 np0005589310 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 20 14:18:14 np0005589310 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 20 14:18:15 np0005589310 multipathd[221200]: --------start up--------
Jan 20 14:18:15 np0005589310 multipathd[221200]: read /etc/multipath.conf
Jan 20 14:18:15 np0005589310 multipathd[221200]: path checkers start up
Jan 20 14:18:15 np0005589310 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 20 14:18:15 np0005589310 python3.9[221359]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 20 14:18:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:16 np0005589310 python3.9[221511]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 20 14:18:16 np0005589310 kernel: Key type psk registered
Jan 20 14:18:17 np0005589310 python3.9[221674]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:18:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:18 np0005589310 python3.9[221797]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936697.0473604-359-72893046642656/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:18 np0005589310 python3.9[221949]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:19 np0005589310 python3.9[222101]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:18:19 np0005589310 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 14:18:19 np0005589310 systemd[1]: Stopped Load Kernel Modules.
Jan 20 14:18:19 np0005589310 systemd[1]: Stopping Load Kernel Modules...
Jan 20 14:18:19 np0005589310 systemd[1]: Starting Load Kernel Modules...
Jan 20 14:18:19 np0005589310 systemd[1]: Finished Load Kernel Modules.
Jan 20 14:18:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:20 np0005589310 python3.9[222257]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 14:18:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:23 np0005589310 systemd[1]: Reloading.
Jan 20 14:18:23 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:18:23 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:18:23 np0005589310 systemd[1]: Reloading.
Jan 20 14:18:23 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:18:23 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:18:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:24 np0005589310 systemd-logind[797]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 20 14:18:24 np0005589310 systemd-logind[797]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 20 14:18:24 np0005589310 lvm[222373]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:18:24 np0005589310 lvm[222373]: VG ceph_vg1 finished
Jan 20 14:18:24 np0005589310 lvm[222372]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:18:24 np0005589310 lvm[222372]: VG ceph_vg0 finished
Jan 20 14:18:24 np0005589310 lvm[222376]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:18:24 np0005589310 lvm[222376]: VG ceph_vg2 finished
Jan 20 14:18:24 np0005589310 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 14:18:24 np0005589310 systemd[1]: Starting man-db-cache-update.service...
Jan 20 14:18:24 np0005589310 systemd[1]: Reloading.
Jan 20 14:18:24 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:18:24 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:18:24 np0005589310 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 14:18:25 np0005589310 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 14:18:25 np0005589310 systemd[1]: Finished man-db-cache-update.service.
Jan 20 14:18:25 np0005589310 systemd[1]: man-db-cache-update.service: Consumed 1.440s CPU time.
Jan 20 14:18:25 np0005589310 systemd[1]: run-r065edafabcfc432aa9bcca6f3d25b4af.service: Deactivated successfully.
Jan 20 14:18:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:25 np0005589310 python3.9[223729]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:18:25 np0005589310 systemd[1]: Stopping Open-iSCSI...
Jan 20 14:18:25 np0005589310 iscsid[217234]: iscsid shutting down.
Jan 20 14:18:25 np0005589310 systemd[1]: iscsid.service: Deactivated successfully.
Jan 20 14:18:25 np0005589310 systemd[1]: Stopped Open-iSCSI.
Jan 20 14:18:25 np0005589310 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 20 14:18:25 np0005589310 systemd[1]: Starting Open-iSCSI...
Jan 20 14:18:25 np0005589310 systemd[1]: Started Open-iSCSI.
Jan 20 14:18:26 np0005589310 python3.9[223886]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:18:26 np0005589310 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 20 14:18:26 np0005589310 multipathd[221200]: exit (signal)
Jan 20 14:18:26 np0005589310 multipathd[221200]: --------shut down-------
Jan 20 14:18:26 np0005589310 systemd[1]: multipathd.service: Deactivated successfully.
Jan 20 14:18:26 np0005589310 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 20 14:18:26 np0005589310 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 20 14:18:26 np0005589310 multipathd[223893]: --------start up--------
Jan 20 14:18:26 np0005589310 multipathd[223893]: read /etc/multipath.conf
Jan 20 14:18:26 np0005589310 multipathd[223893]: path checkers start up
Jan 20 14:18:26 np0005589310 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 20 14:18:27 np0005589310 python3.9[224050]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 14:18:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:28 np0005589310 python3.9[224206]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:29 np0005589310 python3.9[224358]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:18:29 np0005589310 systemd[1]: Reloading.
Jan 20 14:18:29 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:18:29 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:18:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:30 np0005589310 python3.9[224542]: ansible-ansible.builtin.service_facts Invoked
Jan 20 14:18:30 np0005589310 network[224559]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 14:18:30 np0005589310 network[224560]: 'network-scripts' will be removed from distribution in near future.
Jan 20 14:18:30 np0005589310 network[224561]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 14:18:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:18:31
Jan 20 14:18:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:18:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:18:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 20 14:18:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:18:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:33 np0005589310 python3.9[224834]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:18:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:18:34 np0005589310 python3.9[224987]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:18:35 np0005589310 python3.9[225140]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:18:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:36 np0005589310 python3.9[225293]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:18:36 np0005589310 podman[225295]: 2026-01-20 19:18:36.222616906 +0000 UTC m=+0.094780192 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 20 14:18:36 np0005589310 python3.9[225472]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:18:37 np0005589310 python3.9[225625]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:18:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:37 np0005589310 podman[225750]: 2026-01-20 19:18:37.942591921 +0000 UTC m=+0.044759097 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 14:18:38 np0005589310 python3.9[225796]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:18:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:38 np0005589310 python3.9[225949]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:18:39 np0005589310 python3.9[226102]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:40 np0005589310 python3.9[226254]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:40 np0005589310 python3.9[226406]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:41 np0005589310 python3.9[226558]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:41 np0005589310 python3.9[226710]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:42 np0005589310 python3.9[226862]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:43 np0005589310 python3.9[227014]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:43 np0005589310 python3.9[227166]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:18:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:18:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:18:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:18:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:18:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:18:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:18:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:18:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:18:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:18:44 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:18:44 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:18:44 np0005589310 python3.9[227400]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:18:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:18:44 np0005589310 podman[227546]: 2026-01-20 19:18:44.69559193 +0000 UTC m=+0.072459670 container create e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:18:44 np0005589310 podman[227546]: 2026-01-20 19:18:44.648907346 +0000 UTC m=+0.025775116 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:18:44 np0005589310 systemd[1]: Started libpod-conmon-e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383.scope.
Jan 20 14:18:44 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:18:44 np0005589310 podman[227546]: 2026-01-20 19:18:44.79331389 +0000 UTC m=+0.170181630 container init e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bell, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 14:18:44 np0005589310 podman[227546]: 2026-01-20 19:18:44.802530644 +0000 UTC m=+0.179398394 container start e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bell, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:18:44 np0005589310 podman[227546]: 2026-01-20 19:18:44.805938597 +0000 UTC m=+0.182806337 container attach e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 14:18:44 np0005589310 romantic_bell[227603]: 167 167
Jan 20 14:18:44 np0005589310 systemd[1]: libpod-e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383.scope: Deactivated successfully.
Jan 20 14:18:44 np0005589310 podman[227546]: 2026-01-20 19:18:44.808283394 +0000 UTC m=+0.185151134 container died e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bell, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 20 14:18:44 np0005589310 systemd[1]: var-lib-containers-storage-overlay-feba32426eebee1680dcc8256bdd3c8a08980552317a8a774c8385934d18c6b4-merged.mount: Deactivated successfully.
Jan 20 14:18:44 np0005589310 podman[227546]: 2026-01-20 19:18:44.84479872 +0000 UTC m=+0.221666460 container remove e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 14:18:44 np0005589310 systemd[1]: libpod-conmon-e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383.scope: Deactivated successfully.
Jan 20 14:18:45 np0005589310 podman[227655]: 2026-01-20 19:18:45.002414066 +0000 UTC m=+0.041652532 container create 31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_williams, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:18:45 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:18:45 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:18:45 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:18:45 np0005589310 python3.9[227637]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:45 np0005589310 systemd[1]: Started libpod-conmon-31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd.scope.
Jan 20 14:18:45 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:18:45 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b0cef9f0c8630ccdf97af5aeee92d4860094c48a623fa034b4a49fe9dfcf04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:45 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b0cef9f0c8630ccdf97af5aeee92d4860094c48a623fa034b4a49fe9dfcf04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:45 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b0cef9f0c8630ccdf97af5aeee92d4860094c48a623fa034b4a49fe9dfcf04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:45 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b0cef9f0c8630ccdf97af5aeee92d4860094c48a623fa034b4a49fe9dfcf04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:45 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b0cef9f0c8630ccdf97af5aeee92d4860094c48a623fa034b4a49fe9dfcf04/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:45 np0005589310 podman[227655]: 2026-01-20 19:18:44.98446534 +0000 UTC m=+0.023703836 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:18:45 np0005589310 podman[227655]: 2026-01-20 19:18:45.128421794 +0000 UTC m=+0.167660290 container init 31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_williams, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 20 14:18:45 np0005589310 podman[227655]: 2026-01-20 19:18:45.135721111 +0000 UTC m=+0.174959577 container start 31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:18:45 np0005589310 podman[227655]: 2026-01-20 19:18:45.143465628 +0000 UTC m=+0.182704114 container attach 31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_williams, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 20 14:18:45 np0005589310 vigorous_williams[227672]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:18:45 np0005589310 vigorous_williams[227672]: --> All data devices are unavailable
Jan 20 14:18:45 np0005589310 systemd[1]: libpod-31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd.scope: Deactivated successfully.
Jan 20 14:18:45 np0005589310 podman[227655]: 2026-01-20 19:18:45.58888927 +0000 UTC m=+0.628127726 container died 31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_williams, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:18:45 np0005589310 python3.9[227835]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:45 np0005589310 systemd[1]: var-lib-containers-storage-overlay-60b0cef9f0c8630ccdf97af5aeee92d4860094c48a623fa034b4a49fe9dfcf04-merged.mount: Deactivated successfully.
Jan 20 14:18:45 np0005589310 podman[227655]: 2026-01-20 19:18:45.637742035 +0000 UTC m=+0.676980501 container remove 31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_williams, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 20 14:18:45 np0005589310 systemd[1]: libpod-conmon-31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd.scope: Deactivated successfully.
Jan 20 14:18:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:46 np0005589310 podman[228069]: 2026-01-20 19:18:46.076535755 +0000 UTC m=+0.035486902 container create 9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cannon, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:18:46 np0005589310 systemd[1]: Started libpod-conmon-9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196.scope.
Jan 20 14:18:46 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:18:46 np0005589310 podman[228069]: 2026-01-20 19:18:46.155309597 +0000 UTC m=+0.114260764 container init 9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:18:46 np0005589310 podman[228069]: 2026-01-20 19:18:46.060587958 +0000 UTC m=+0.019539135 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:18:46 np0005589310 podman[228069]: 2026-01-20 19:18:46.16325933 +0000 UTC m=+0.122210477 container start 9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:18:46 np0005589310 serene_cannon[228085]: 167 167
Jan 20 14:18:46 np0005589310 systemd[1]: libpod-9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196.scope: Deactivated successfully.
Jan 20 14:18:46 np0005589310 podman[228069]: 2026-01-20 19:18:46.166658892 +0000 UTC m=+0.125610069 container attach 9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Jan 20 14:18:46 np0005589310 podman[228069]: 2026-01-20 19:18:46.168503528 +0000 UTC m=+0.127454675 container died 9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cannon, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:18:46 np0005589310 systemd[1]: var-lib-containers-storage-overlay-481fee7a42902d0c1d32df4d099d33c889aee2c447986db47bbb401bb9e901a6-merged.mount: Deactivated successfully.
Jan 20 14:18:46 np0005589310 podman[228069]: 2026-01-20 19:18:46.205085945 +0000 UTC m=+0.164037092 container remove 9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:18:46 np0005589310 systemd[1]: libpod-conmon-9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196.scope: Deactivated successfully.
Jan 20 14:18:46 np0005589310 python3.9[228068]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:46 np0005589310 podman[228133]: 2026-01-20 19:18:46.364059193 +0000 UTC m=+0.040667568 container create 8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rosalind, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 14:18:46 np0005589310 systemd[1]: Started libpod-conmon-8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e.scope.
Jan 20 14:18:46 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:18:46 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17bfc67ed83fc6dd79cf207699eaff019df5b360aa85f6ed4fc6edf7821d985/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:46 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17bfc67ed83fc6dd79cf207699eaff019df5b360aa85f6ed4fc6edf7821d985/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:46 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17bfc67ed83fc6dd79cf207699eaff019df5b360aa85f6ed4fc6edf7821d985/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:46 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17bfc67ed83fc6dd79cf207699eaff019df5b360aa85f6ed4fc6edf7821d985/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:46 np0005589310 podman[228133]: 2026-01-20 19:18:46.347683216 +0000 UTC m=+0.024291621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:18:46 np0005589310 podman[228133]: 2026-01-20 19:18:46.455179695 +0000 UTC m=+0.131788090 container init 8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rosalind, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Jan 20 14:18:46 np0005589310 podman[228133]: 2026-01-20 19:18:46.463378274 +0000 UTC m=+0.139986649 container start 8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rosalind, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:18:46 np0005589310 podman[228133]: 2026-01-20 19:18:46.466798287 +0000 UTC m=+0.143406662 container attach 8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rosalind, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]: {
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:    "0": [
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:        {
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "devices": [
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "/dev/loop3"
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            ],
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "lv_name": "ceph_lv0",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "lv_size": "21470642176",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "name": "ceph_lv0",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "tags": {
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.cluster_name": "ceph",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.crush_device_class": "",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.encrypted": "0",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.objectstore": "bluestore",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.osd_id": "0",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.type": "block",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.vdo": "0",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.with_tpm": "0"
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            },
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "type": "block",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "vg_name": "ceph_vg0"
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:        }
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:    ],
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:    "1": [
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:        {
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "devices": [
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "/dev/loop4"
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            ],
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "lv_name": "ceph_lv1",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "lv_size": "21470642176",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "name": "ceph_lv1",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "tags": {
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.cluster_name": "ceph",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.crush_device_class": "",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.encrypted": "0",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.objectstore": "bluestore",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.osd_id": "1",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.type": "block",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.vdo": "0",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.with_tpm": "0"
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            },
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "type": "block",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "vg_name": "ceph_vg1"
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:        }
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:    ],
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:    "2": [
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:        {
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "devices": [
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "/dev/loop5"
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            ],
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "lv_name": "ceph_lv2",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "lv_size": "21470642176",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "name": "ceph_lv2",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "tags": {
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.cluster_name": "ceph",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.crush_device_class": "",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.encrypted": "0",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.objectstore": "bluestore",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.osd_id": "2",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.type": "block",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.vdo": "0",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:                "ceph.with_tpm": "0"
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            },
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "type": "block",
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:            "vg_name": "ceph_vg2"
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:        }
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]:    ]
Jan 20 14:18:46 np0005589310 upbeat_rosalind[228194]: }
Jan 20 14:18:46 np0005589310 systemd[1]: libpod-8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e.scope: Deactivated successfully.
Jan 20 14:18:46 np0005589310 conmon[228194]: conmon 8c4d18cbc2f1609c83e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e.scope/container/memory.events
Jan 20 14:18:46 np0005589310 podman[228133]: 2026-01-20 19:18:46.756400615 +0000 UTC m=+0.433008990 container died 8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:18:46 np0005589310 systemd[1]: var-lib-containers-storage-overlay-e17bfc67ed83fc6dd79cf207699eaff019df5b360aa85f6ed4fc6edf7821d985-merged.mount: Deactivated successfully.
Jan 20 14:18:46 np0005589310 python3.9[228282]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:46 np0005589310 podman[228133]: 2026-01-20 19:18:46.807712561 +0000 UTC m=+0.484320936 container remove 8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 20 14:18:46 np0005589310 systemd[1]: libpod-conmon-8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e.scope: Deactivated successfully.
Jan 20 14:18:47 np0005589310 podman[228487]: 2026-01-20 19:18:47.234400857 +0000 UTC m=+0.039387786 container create 6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:18:47 np0005589310 systemd[1]: Started libpod-conmon-6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027.scope.
Jan 20 14:18:47 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:18:47 np0005589310 podman[228487]: 2026-01-20 19:18:47.216237487 +0000 UTC m=+0.021224436 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:18:47 np0005589310 podman[228487]: 2026-01-20 19:18:47.315495086 +0000 UTC m=+0.120482035 container init 6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bassi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 14:18:47 np0005589310 podman[228487]: 2026-01-20 19:18:47.322004513 +0000 UTC m=+0.126991442 container start 6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:18:47 np0005589310 podman[228487]: 2026-01-20 19:18:47.325466098 +0000 UTC m=+0.130453057 container attach 6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 14:18:47 np0005589310 stoic_bassi[228529]: 167 167
Jan 20 14:18:47 np0005589310 systemd[1]: libpod-6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027.scope: Deactivated successfully.
Jan 20 14:18:47 np0005589310 podman[228487]: 2026-01-20 19:18:47.327520487 +0000 UTC m=+0.132507416 container died 6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 14:18:47 np0005589310 systemd[1]: var-lib-containers-storage-overlay-9488fc802a30c588fff5d29435ea64224755299514897f22415b1dfb5fa6e020-merged.mount: Deactivated successfully.
Jan 20 14:18:47 np0005589310 podman[228487]: 2026-01-20 19:18:47.364105785 +0000 UTC m=+0.169092714 container remove 6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:18:47 np0005589310 systemd[1]: libpod-conmon-6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027.scope: Deactivated successfully.
Jan 20 14:18:47 np0005589310 python3.9[228525]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:47 np0005589310 podman[228555]: 2026-01-20 19:18:47.540505346 +0000 UTC m=+0.048843915 container create 2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_driscoll, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:18:47 np0005589310 systemd[1]: Started libpod-conmon-2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861.scope.
Jan 20 14:18:47 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:18:47 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75fa455e9f3dd7e3afbe1fa0e775d3f94db77b77a5198d0dca0c18f28f704ef6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:47 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75fa455e9f3dd7e3afbe1fa0e775d3f94db77b77a5198d0dca0c18f28f704ef6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:47 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75fa455e9f3dd7e3afbe1fa0e775d3f94db77b77a5198d0dca0c18f28f704ef6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:47 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75fa455e9f3dd7e3afbe1fa0e775d3f94db77b77a5198d0dca0c18f28f704ef6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:18:47 np0005589310 podman[228555]: 2026-01-20 19:18:47.607813971 +0000 UTC m=+0.116152620 container init 2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_driscoll, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:18:47 np0005589310 podman[228555]: 2026-01-20 19:18:47.519185329 +0000 UTC m=+0.027523918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:18:47 np0005589310 podman[228555]: 2026-01-20 19:18:47.619657778 +0000 UTC m=+0.127996337 container start 2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:18:47 np0005589310 podman[228555]: 2026-01-20 19:18:47.624807893 +0000 UTC m=+0.133146482 container attach 2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:18:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:48 np0005589310 python3.9[228737]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:48 np0005589310 lvm[228879]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:18:48 np0005589310 lvm[228883]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:18:48 np0005589310 lvm[228879]: VG ceph_vg0 finished
Jan 20 14:18:48 np0005589310 lvm[228883]: VG ceph_vg1 finished
Jan 20 14:18:48 np0005589310 lvm[228901]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:18:48 np0005589310 lvm[228901]: VG ceph_vg2 finished
Jan 20 14:18:48 np0005589310 magical_driscoll[228595]: {}
Jan 20 14:18:48 np0005589310 systemd[1]: libpod-2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861.scope: Deactivated successfully.
Jan 20 14:18:48 np0005589310 podman[228555]: 2026-01-20 19:18:48.473759777 +0000 UTC m=+0.982098356 container died 2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:18:48 np0005589310 systemd[1]: libpod-2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861.scope: Consumed 1.382s CPU time.
Jan 20 14:18:48 np0005589310 systemd[1]: var-lib-containers-storage-overlay-75fa455e9f3dd7e3afbe1fa0e775d3f94db77b77a5198d0dca0c18f28f704ef6-merged.mount: Deactivated successfully.
Jan 20 14:18:48 np0005589310 podman[228555]: 2026-01-20 19:18:48.523494975 +0000 UTC m=+1.031833544 container remove 2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_driscoll, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 14:18:48 np0005589310 systemd[1]: libpod-conmon-2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861.scope: Deactivated successfully.
Jan 20 14:18:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:18:48 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:18:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:18:48 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:18:48 np0005589310 python3.9[228969]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:18:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:49 np0005589310 python3.9[229146]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:18:49 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:18:49 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:18:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:50 np0005589310 python3.9[229298]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 14:18:50 np0005589310 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 20 14:18:50 np0005589310 python3.9[229451]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:18:50 np0005589310 systemd[1]: Reloading.
Jan 20 14:18:51 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:18:51 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:18:51 np0005589310 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 20 14:18:51 np0005589310 python3.9[229639]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:18:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:52 np0005589310 python3.9[229792]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:18:53 np0005589310 python3.9[229945]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:18:53 np0005589310 python3.9[230098]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:18:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:54 np0005589310 python3.9[230251]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:18:54 np0005589310 python3.9[230404]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:18:55 np0005589310 python3.9[230557]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:18:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:55 np0005589310 python3.9[230710]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 14:18:57 np0005589310 python3.9[230863]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:18:57 np0005589310 python3.9[231015]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:18:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:18:58 np0005589310 python3.9[231167]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:18:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:18:58 np0005589310 python3.9[231319]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:18:59 np0005589310 python3.9[231471]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:18:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:00 np0005589310 python3.9[231623]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:19:00 np0005589310 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 20 14:19:00 np0005589310 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 20 14:19:00 np0005589310 python3.9[231777]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:19:01 np0005589310 python3.9[231929]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:19:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:01 np0005589310 python3.9[232081]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:19:02 np0005589310 python3.9[232233]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:19:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:19:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:19:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:19:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:19:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:19:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:19:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:19:05.444 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:19:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:19:05.445 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:19:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:19:05.445 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:19:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:06 np0005589310 podman[232258]: 2026-01-20 19:19:06.438112224 +0000 UTC m=+0.111125527 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 14:19:07 np0005589310 python3.9[232412]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 20 14:19:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:08 np0005589310 podman[232537]: 2026-01-20 19:19:08.330382571 +0000 UTC m=+0.083652972 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:19:08 np0005589310 python3.9[232584]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 14:19:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:09 np0005589310 python3.9[232743]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 14:19:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:10 np0005589310 systemd-logind[797]: New session 51 of user zuul.
Jan 20 14:19:10 np0005589310 systemd[1]: Started Session 51 of User zuul.
Jan 20 14:19:10 np0005589310 systemd[1]: session-51.scope: Deactivated successfully.
Jan 20 14:19:10 np0005589310 systemd-logind[797]: Session 51 logged out. Waiting for processes to exit.
Jan 20 14:19:10 np0005589310 systemd-logind[797]: Removed session 51.
Jan 20 14:19:11 np0005589310 python3.9[232929]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:19:11 np0005589310 python3.9[233050]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936750.8253129-986-121421234066519/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:19:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:12 np0005589310 python3.9[233200]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:19:12 np0005589310 python3.9[233276]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:19:13 np0005589310 python3.9[233426]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:19:13 np0005589310 python3.9[233547]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936752.8455598-986-133233329163324/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:19:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:14 np0005589310 python3.9[233697]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:19:14 np0005589310 python3.9[233818]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936753.836776-986-50083820915030/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:19:15 np0005589310 python3.9[233968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:19:15 np0005589310 python3.9[234089]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936754.897706-986-118519131702604/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:19:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:16 np0005589310 python3.9[234239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:19:16 np0005589310 python3.9[234360]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936755.8897028-986-225756935706964/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:19:17 np0005589310 python3.9[234512]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:19:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:18 np0005589310 python3.9[234664]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:19:18 np0005589310 python3.9[234816]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:19:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:19 np0005589310 python3.9[234968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:19:19 np0005589310 python3.9[235091]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1768936758.786734-1093-179629866760944/.source _original_basename=.kok20ry_ follow=False checksum=73adea5ed8aa8586a32b875bac50fd818bde17fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 20 14:19:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:20 np0005589310 python3.9[235243]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:19:21 np0005589310 python3.9[235395]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:19:21 np0005589310 python3.9[235516]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936760.6883092-1119-132680697012746/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:19:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:22 np0005589310 python3.9[235666]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 14:19:22 np0005589310 python3.9[235787]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936761.801784-1134-13881176490092/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 14:19:23 np0005589310 python3.9[235939]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 20 14:19:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:24 np0005589310 python3.9[236091]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 14:19:25 np0005589310 python3[236243]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 14:19:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 14:19:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 14:19:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 14:19:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:19:31
Jan 20 14:19:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:19:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:19:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'volumes', '.rgw.root', '.mgr', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms']
Jan 20 14:19:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:19:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 14:19:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:19:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:19:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 14:19:36 np0005589310 podman[236256]: 2026-01-20 19:19:36.617035997 +0000 UTC m=+10.696701287 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 20 14:19:36 np0005589310 podman[236339]: 2026-01-20 19:19:36.766674848 +0000 UTC m=+0.066635298 container create d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251202, managed_by=edpm_ansible)
Jan 20 14:19:36 np0005589310 podman[236339]: 2026-01-20 19:19:36.724072595 +0000 UTC m=+0.024033065 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 20 14:19:36 np0005589310 python3[236243]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 20 14:19:37 np0005589310 podman[236501]: 2026-01-20 19:19:37.342424883 +0000 UTC m=+0.080881094 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:19:37 np0005589310 python3.9[236546]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:19:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:38 np0005589310 python3.9[236709]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 20 14:19:38 np0005589310 podman[236833]: 2026-01-20 19:19:38.902816684 +0000 UTC m=+0.054030672 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 14:19:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:39 np0005589310 python3.9[236880]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 14:19:39 np0005589310 python3[237032]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 14:19:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:40 np0005589310 podman[237067]: 2026-01-20 19:19:40.070150677 +0000 UTC m=+0.047038203 container create 26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:19:40 np0005589310 podman[237067]: 2026-01-20 19:19:40.043284865 +0000 UTC m=+0.020172421 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 20 14:19:40 np0005589310 python3[237032]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 20 14:19:40 np0005589310 python3.9[237257]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:19:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:43 np0005589310 python3.9[237411]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.919042) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936783919104, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1867, "num_deletes": 250, "total_data_size": 3154602, "memory_usage": 3194392, "flush_reason": "Manual Compaction"}
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936783933985, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1773559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11778, "largest_seqno": 13644, "table_properties": {"data_size": 1767537, "index_size": 3033, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15072, "raw_average_key_size": 20, "raw_value_size": 1754200, "raw_average_value_size": 2338, "num_data_blocks": 140, "num_entries": 750, "num_filter_entries": 750, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936570, "oldest_key_time": 1768936570, "file_creation_time": 1768936783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 15010 microseconds, and 5529 cpu microseconds.
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.934058) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1773559 bytes OK
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.934078) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.935696) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.935710) EVENT_LOG_v1 {"time_micros": 1768936783935707, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.935726) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3146755, prev total WAL file size 3146755, number of live WAL files 2.
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.936529) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1731KB)], [29(7980KB)]
Jan 20 14:19:43 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936783936610, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9945303, "oldest_snapshot_seqno": -1}
Jan 20 14:19:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4061 keys, 7935047 bytes, temperature: kUnknown
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936784030587, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7935047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7906056, "index_size": 17745, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 96447, "raw_average_key_size": 23, "raw_value_size": 7831047, "raw_average_value_size": 1928, "num_data_blocks": 771, "num_entries": 4061, "num_filter_entries": 4061, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 1768936783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.030785) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7935047 bytes
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.032012) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.8 rd, 84.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.8 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(10.1) write-amplify(4.5) OK, records in: 4473, records dropped: 412 output_compression: NoCompression
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.032028) EVENT_LOG_v1 {"time_micros": 1768936784032020, "job": 12, "event": "compaction_finished", "compaction_time_micros": 94031, "compaction_time_cpu_micros": 18148, "output_level": 6, "num_output_files": 1, "total_output_size": 7935047, "num_input_records": 4473, "num_output_records": 4061, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936784032492, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936784033878, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.936433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.034000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.034011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.034013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.034015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:44 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.034016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:19:44 np0005589310 python3.9[237562]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768936783.7317803-1230-52676674584819/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:19:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:19:44 np0005589310 python3.9[237638]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 14:19:44 np0005589310 systemd[1]: Reloading.
Jan 20 14:19:44 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:19:44 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:19:45 np0005589310 python3.9[237749]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 14:19:45 np0005589310 systemd[1]: Reloading.
Jan 20 14:19:45 np0005589310 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 14:19:45 np0005589310 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 14:19:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:46 np0005589310 systemd[1]: Starting nova_compute container...
Jan 20 14:19:46 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:19:46 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:46 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:46 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:46 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:46 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:46 np0005589310 podman[237790]: 2026-01-20 19:19:46.325733964 +0000 UTC m=+0.098287847 container init 26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 20 14:19:46 np0005589310 podman[237790]: 2026-01-20 19:19:46.331411872 +0000 UTC m=+0.103965735 container start 26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 14:19:46 np0005589310 podman[237790]: nova_compute
Jan 20 14:19:46 np0005589310 nova_compute[237805]: + sudo -E kolla_set_configs
Jan 20 14:19:46 np0005589310 systemd[1]: Started nova_compute container.
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Validating config file
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Copying service configuration files
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Deleting /etc/ceph
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Creating directory /etc/ceph
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /etc/ceph
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Writing out command to execute
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:19:46 np0005589310 nova_compute[237805]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 14:19:46 np0005589310 nova_compute[237805]: ++ cat /run_command
Jan 20 14:19:46 np0005589310 nova_compute[237805]: + CMD=nova-compute
Jan 20 14:19:46 np0005589310 nova_compute[237805]: + ARGS=
Jan 20 14:19:46 np0005589310 nova_compute[237805]: + sudo kolla_copy_cacerts
Jan 20 14:19:46 np0005589310 nova_compute[237805]: + [[ ! -n '' ]]
Jan 20 14:19:46 np0005589310 nova_compute[237805]: + . kolla_extend_start
Jan 20 14:19:46 np0005589310 nova_compute[237805]: + echo 'Running command: '\''nova-compute'\'''
Jan 20 14:19:46 np0005589310 nova_compute[237805]: Running command: 'nova-compute'
Jan 20 14:19:46 np0005589310 nova_compute[237805]: + umask 0022
Jan 20 14:19:46 np0005589310 nova_compute[237805]: + exec nova-compute
Jan 20 14:19:47 np0005589310 python3.9[237966]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:19:47 np0005589310 python3.9[238117]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:19:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:48 np0005589310 nova_compute[237805]: 2026-01-20 19:19:48.513 237809 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 20 14:19:48 np0005589310 nova_compute[237805]: 2026-01-20 19:19:48.513 237809 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 20 14:19:48 np0005589310 nova_compute[237805]: 2026-01-20 19:19:48.513 237809 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 20 14:19:48 np0005589310 nova_compute[237805]: 2026-01-20 19:19:48.513 237809 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 20 14:19:48 np0005589310 python3.9[238267]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 14:19:48 np0005589310 nova_compute[237805]: 2026-01-20 19:19:48.664 237809 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:19:48 np0005589310 nova_compute[237805]: 2026-01-20 19:19:48.680 237809 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:19:48 np0005589310 nova_compute[237805]: 2026-01-20 19:19:48.680 237809 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 20 14:19:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.249 237809 INFO nova.virt.driver [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 20 14:19:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:19:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:19:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:19:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:19:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:19:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:19:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:19:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:19:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:19:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:19:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:19:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.380 237809 INFO nova.compute.provider_config [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 20 14:19:49 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:19:49 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:19:49 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.398 237809 DEBUG oslo_concurrency.lockutils [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.399 237809 DEBUG oslo_concurrency.lockutils [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.399 237809 DEBUG oslo_concurrency.lockutils [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.399 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.399 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.400 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.400 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.400 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.400 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.400 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.400 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.400 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.401 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.401 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.401 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.401 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.401 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.401 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.401 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.402 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.402 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.402 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.402 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.402 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.402 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.403 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.403 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.403 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.403 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.403 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.403 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.403 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.404 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.404 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.404 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.404 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.404 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.404 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.405 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.405 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.405 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.405 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.405 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.405 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.406 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.406 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.406 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.406 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.406 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.406 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.406 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.407 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.407 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.407 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.407 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.407 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.407 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.407 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.408 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.408 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.408 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.408 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.408 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.408 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.408 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.410 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.410 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.410 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.410 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.410 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.410 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.410 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.411 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.411 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.411 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.411 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.411 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.411 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.411 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.412 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.412 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.412 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.412 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.412 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.412 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.412 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.415 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.415 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.415 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.415 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.415 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.415 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.415 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.417 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.417 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.417 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.417 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.417 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.417 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.417 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.419 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.419 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.419 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.419 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.419 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.419 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.419 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.420 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.420 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.420 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.420 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.420 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.420 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.420 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.421 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.421 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.421 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.421 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.421 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.421 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.421 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.422 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.422 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.422 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.422 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.422 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.422 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.423 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.423 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.423 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.423 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.423 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.423 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.423 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.424 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.424 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.424 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.424 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.424 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.424 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.424 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.425 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.425 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.425 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.425 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.425 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.425 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.425 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.426 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.426 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.426 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.426 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.426 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.426 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.426 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.427 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.427 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.427 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.427 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.427 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.427 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.427 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.429 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.429 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.429 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.429 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.429 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.429 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.430 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.430 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.430 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.430 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.430 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.430 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.430 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.431 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.431 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.431 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.431 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.431 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.431 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.432 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.432 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.432 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.432 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.432 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.432 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.433 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.433 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.433 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.433 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.433 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.433 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.433 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.434 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.434 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.434 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.434 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.434 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.434 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.434 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.435 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.435 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.435 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.435 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.435 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.435 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.435 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.436 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.436 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.436 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.436 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.436 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.436 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.436 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.437 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.437 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.437 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.437 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.437 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.437 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.438 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.438 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.438 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.438 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.438 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.439 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.439 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.439 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.439 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.439 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.439 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.440 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.440 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.440 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.440 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.440 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.440 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.440 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.441 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.441 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.441 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.441 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.441 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.441 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.441 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.442 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.442 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.442 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.442 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.442 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.442 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.443 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.443 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.443 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.443 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.443 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.443 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.443 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.444 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.444 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.444 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.444 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.444 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.444 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.444 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.445 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.445 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.445 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.445 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.445 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.445 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.446 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.446 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.446 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.446 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.446 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.446 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.446 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.447 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.447 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.447 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.447 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.447 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.447 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.447 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.448 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.448 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.448 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.448 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.448 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.448 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.448 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.449 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.449 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.449 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.449 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.449 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.449 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.450 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.450 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.450 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.450 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.451 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.451 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.451 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.451 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.452 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.452 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.452 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.453 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.454 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.454 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.454 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.454 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.454 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.455 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.455 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.455 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.455 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.456 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.456 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.456 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.456 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.456 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.457 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.457 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.457 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.457 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.457 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.458 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.458 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.458 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.458 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.459 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.459 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.459 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.459 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.459 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.460 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.460 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.460 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.460 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.461 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.461 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.461 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.461 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.462 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.462 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.462 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.462 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.462 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.463 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.463 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.463 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.463 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.464 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.464 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.464 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.464 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.464 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.465 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.465 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.465 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.465 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.465 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.466 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.466 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.466 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.466 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.467 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.467 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.467 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.467 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.468 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.468 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.468 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.468 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.468 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.469 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.469 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.469 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.469 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.469 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.470 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.470 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.470 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.470 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.471 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.471 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.471 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.471 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.471 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.472 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.472 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.472 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.472 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.472 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.473 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.473 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.473 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.473 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.474 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.474 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.474 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.474 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.475 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.475 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.475 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.475 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.476 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.476 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.476 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.476 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.477 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.477 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.477 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.477 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.477 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.478 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.478 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.478 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.478 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.479 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.479 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.479 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.479 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.479 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.480 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.480 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.480 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.480 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.481 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.481 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.481 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.481 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.481 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.482 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.482 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.482 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.482 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.483 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.483 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.483 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.483 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.483 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.484 237809 WARNING oslo_config.cfg [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 20 14:19:49 np0005589310 nova_compute[237805]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 20 14:19:49 np0005589310 nova_compute[237805]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 20 14:19:49 np0005589310 nova_compute[237805]: and ``live_migration_inbound_addr`` respectively.
Jan 20 14:19:49 np0005589310 nova_compute[237805]: ).  Its value may be silently ignored in the future.#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.484 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.484 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.485 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.485 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.485 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.485 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.486 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.486 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.486 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.486 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.486 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.487 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.487 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.487 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.487 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.488 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.488 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.488 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.488 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rbd_secret_uuid        = 90fff835-31df-513f-a409-b6642f04e6ac log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.489 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.489 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.489 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.489 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.489 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.490 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.490 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.490 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.490 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.491 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.491 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.491 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.491 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.492 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.492 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.492 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.492 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.492 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.493 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.493 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.493 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.493 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.493 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.494 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.494 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.494 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.494 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.495 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.495 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.495 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.495 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.496 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.496 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.496 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.496 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.496 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.497 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.497 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.497 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.497 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.497 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.498 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.498 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.498 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.498 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.499 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.499 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.499 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.499 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.499 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.500 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.500 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.500 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.500 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.501 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.501 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.501 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.501 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.501 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.502 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.502 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.502 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.502 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.503 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.503 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.503 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.503 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.503 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.504 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.504 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.504 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.504 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.504 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.505 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.505 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.505 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.505 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.506 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.506 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.506 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.506 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.506 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.507 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.507 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.507 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.507 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.508 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.508 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.508 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.508 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.508 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.509 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.509 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.509 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.509 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.510 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.510 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.510 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.510 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.510 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.511 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.511 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.511 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.511 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.511 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.512 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.512 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.512 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.512 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.513 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.513 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.513 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.513 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.513 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.514 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.514 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.514 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.514 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.514 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.515 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 python3.9[238492]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.515 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.515 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.515 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.516 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.516 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.516 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.516 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.516 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.517 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.517 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.517 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.517 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.517 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.518 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.518 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.518 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.518 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.518 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.518 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.518 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.519 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.519 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.519 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.519 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.519 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.519 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.519 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.520 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.520 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.520 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.520 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.520 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.520 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.520 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.521 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.521 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.521 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.521 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.521 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.522 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.522 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.522 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.522 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.522 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.522 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.523 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.523 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.523 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.523 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.523 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.523 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.523 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.524 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.524 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.524 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.524 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.524 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.524 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.524 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.525 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.525 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.525 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.525 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.525 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.525 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.525 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.526 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.526 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.526 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.526 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.526 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.526 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.526 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.527 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.527 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.527 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.527 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.527 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.527 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.527 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.528 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.528 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.528 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.528 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.528 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.528 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.528 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.529 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.529 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.529 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.529 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.529 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.529 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.529 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.530 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.530 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.530 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.530 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.530 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.530 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.530 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.531 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.531 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.531 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.531 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.531 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.531 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.531 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.532 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.532 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.532 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.532 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.532 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.532 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.533 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.533 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.533 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.533 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.533 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.533 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.533 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.534 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.534 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.534 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.534 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.534 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.534 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.534 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.535 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.535 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.535 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.535 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.535 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.535 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.535 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.536 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.536 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.536 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.536 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.536 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.536 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.536 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.537 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.537 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.537 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.537 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.537 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.537 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.537 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.538 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.538 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.538 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.538 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.538 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.538 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.539 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.539 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.539 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.539 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.539 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.539 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.539 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.540 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.540 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.540 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.540 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.540 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.540 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.541 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.541 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.541 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.541 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.541 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.542 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.542 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.542 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.542 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.542 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.542 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.543 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.543 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.543 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.543 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.543 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.543 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.544 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.544 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.544 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.544 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.544 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.545 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.545 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.545 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.545 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.545 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.546 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.546 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.546 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.546 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.546 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.547 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.547 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.547 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.547 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.547 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.548 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.548 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.548 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.548 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.548 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.548 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.549 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.549 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.549 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.549 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.549 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.550 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.550 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.550 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.550 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.550 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.551 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.551 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.551 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.551 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.551 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.552 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.552 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.552 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.552 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.552 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.552 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.553 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.553 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.553 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.553 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.553 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.554 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.554 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.554 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.554 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.554 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.555 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.555 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.555 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.555 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.555 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.556 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.556 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.556 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.556 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.556 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.556 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.557 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.557 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.557 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.557 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.557 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.558 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.558 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.558 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.558 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.558 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.559 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.559 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.559 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.559 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.559 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.560 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.560 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.560 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.560 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.560 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.561 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.561 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.561 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.561 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.561 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.561 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.562 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.562 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.562 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.562 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.562 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.563 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.564 237809 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.579 237809 DEBUG nova.virt.libvirt.host [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.579 237809 DEBUG nova.virt.libvirt.host [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.580 237809 DEBUG nova.virt.libvirt.host [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.580 237809 DEBUG nova.virt.libvirt.host [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 20 14:19:49 np0005589310 systemd[1]: Starting libvirt QEMU daemon...
Jan 20 14:19:49 np0005589310 systemd[1]: Started libvirt QEMU daemon.
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.641 237809 DEBUG nova.virt.libvirt.host [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fddfe88f460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.644 237809 DEBUG nova.virt.libvirt.host [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fddfe88f460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.646 237809 INFO nova.virt.libvirt.driver [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.659 237809 WARNING nova.virt.libvirt.driver [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 20 14:19:49 np0005589310 nova_compute[237805]: 2026-01-20 19:19:49.660 237809 DEBUG nova.virt.libvirt.volume.mount [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 20 14:19:49 np0005589310 podman[238655]: 2026-01-20 19:19:49.748915488 +0000 UTC m=+0.050648732 container create 8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wilbur, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 20 14:19:49 np0005589310 systemd[1]: Started libpod-conmon-8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2.scope.
Jan 20 14:19:49 np0005589310 podman[238655]: 2026-01-20 19:19:49.722949507 +0000 UTC m=+0.024682781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:19:49 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:19:49 np0005589310 podman[238655]: 2026-01-20 19:19:49.82981177 +0000 UTC m=+0.131545044 container init 8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:19:49 np0005589310 podman[238655]: 2026-01-20 19:19:49.835556409 +0000 UTC m=+0.137289653 container start 8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 14:19:49 np0005589310 podman[238655]: 2026-01-20 19:19:49.83841337 +0000 UTC m=+0.140146634 container attach 8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wilbur, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 20 14:19:49 np0005589310 recursing_wilbur[238702]: 167 167
Jan 20 14:19:49 np0005589310 systemd[1]: libpod-8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2.scope: Deactivated successfully.
Jan 20 14:19:49 np0005589310 podman[238655]: 2026-01-20 19:19:49.840819177 +0000 UTC m=+0.142552431 container died 8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:19:49 np0005589310 systemd[1]: var-lib-containers-storage-overlay-f350fc7b68c25a168bcd89e5f7802c3f0b110d84a5f1f8b2ab81bf8edc34d1d2-merged.mount: Deactivated successfully.
Jan 20 14:19:49 np0005589310 podman[238655]: 2026-01-20 19:19:49.877130389 +0000 UTC m=+0.178863633 container remove 8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wilbur, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:19:49 np0005589310 systemd[1]: libpod-conmon-8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2.scope: Deactivated successfully.
Jan 20 14:19:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:50 np0005589310 podman[238801]: 2026-01-20 19:19:50.040023742 +0000 UTC m=+0.048555589 container create 06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hofstadter, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 20 14:19:50 np0005589310 systemd[1]: Started libpod-conmon-06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78.scope.
Jan 20 14:19:50 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:19:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd1e54545df10a12b7c4d91b3f29c99257d3a8cfd4eb6fc2ef9a57a554d3b85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd1e54545df10a12b7c4d91b3f29c99257d3a8cfd4eb6fc2ef9a57a554d3b85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd1e54545df10a12b7c4d91b3f29c99257d3a8cfd4eb6fc2ef9a57a554d3b85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd1e54545df10a12b7c4d91b3f29c99257d3a8cfd4eb6fc2ef9a57a554d3b85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:50 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd1e54545df10a12b7c4d91b3f29c99257d3a8cfd4eb6fc2ef9a57a554d3b85/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:50 np0005589310 podman[238801]: 2026-01-20 19:19:50.022103758 +0000 UTC m=+0.030635635 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:19:50 np0005589310 podman[238801]: 2026-01-20 19:19:50.134103826 +0000 UTC m=+0.142635673 container init 06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:19:50 np0005589310 podman[238801]: 2026-01-20 19:19:50.142632763 +0000 UTC m=+0.151164610 container start 06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 14:19:50 np0005589310 podman[238801]: 2026-01-20 19:19:50.147199524 +0000 UTC m=+0.155731411 container attach 06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 14:19:50 np0005589310 python3.9[238843]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 14:19:50 np0005589310 systemd[1]: Stopping nova_compute container...
Jan 20 14:19:50 np0005589310 nova_compute[237805]: 2026-01-20 19:19:50.448 237809 DEBUG oslo_concurrency.lockutils [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 20 14:19:50 np0005589310 nova_compute[237805]: 2026-01-20 19:19:50.449 237809 DEBUG oslo_concurrency.lockutils [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 20 14:19:50 np0005589310 nova_compute[237805]: 2026-01-20 19:19:50.450 237809 DEBUG oslo_concurrency.lockutils [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 20 14:19:50 np0005589310 focused_hofstadter[238846]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:19:50 np0005589310 focused_hofstadter[238846]: --> All data devices are unavailable
Jan 20 14:19:50 np0005589310 systemd[1]: libpod-06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78.scope: Deactivated successfully.
Jan 20 14:19:50 np0005589310 podman[238801]: 2026-01-20 19:19:50.668146997 +0000 UTC m=+0.676678874 container died 06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hofstadter, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 14:19:50 np0005589310 systemd[1]: var-lib-containers-storage-overlay-bfd1e54545df10a12b7c4d91b3f29c99257d3a8cfd4eb6fc2ef9a57a554d3b85-merged.mount: Deactivated successfully.
Jan 20 14:19:50 np0005589310 podman[238801]: 2026-01-20 19:19:50.709690406 +0000 UTC m=+0.718222253 container remove 06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:19:50 np0005589310 systemd[1]: libpod-conmon-06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78.scope: Deactivated successfully.
Jan 20 14:19:50 np0005589310 virtqemud[238596]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 20 14:19:50 np0005589310 systemd[1]: libpod-26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0.scope: Deactivated successfully.
Jan 20 14:19:50 np0005589310 virtqemud[238596]: hostname: compute-0
Jan 20 14:19:50 np0005589310 virtqemud[238596]: End of file while reading data: Input/output error
Jan 20 14:19:50 np0005589310 systemd[1]: libpod-26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0.scope: Consumed 3.083s CPU time.
Jan 20 14:19:50 np0005589310 podman[238864]: 2026-01-20 19:19:50.930139966 +0000 UTC m=+0.517649744 container died 26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251202, config_id=edpm, io.buildah.version=1.41.3)
Jan 20 14:19:50 np0005589310 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0-userdata-shm.mount: Deactivated successfully.
Jan 20 14:19:50 np0005589310 systemd[1]: var-lib-containers-storage-overlay-cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec-merged.mount: Deactivated successfully.
Jan 20 14:19:51 np0005589310 podman[238864]: 2026-01-20 19:19:51.947740254 +0000 UTC m=+1.535250032 container cleanup 26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:19:51 np0005589310 podman[238864]: nova_compute
Jan 20 14:19:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:52 np0005589310 podman[238979]: 2026-01-20 19:19:52.038736833 +0000 UTC m=+0.071943247 container create a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 14:19:52 np0005589310 podman[238981]: nova_compute
Jan 20 14:19:52 np0005589310 podman[238979]: 2026-01-20 19:19:51.989347105 +0000 UTC m=+0.022553539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:19:52 np0005589310 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 20 14:19:52 np0005589310 systemd[1]: Stopped nova_compute container.
Jan 20 14:19:52 np0005589310 systemd[1]: Started libpod-conmon-a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f.scope.
Jan 20 14:19:52 np0005589310 systemd[1]: Starting nova_compute container...
Jan 20 14:19:52 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:19:52 np0005589310 podman[238979]: 2026-01-20 19:19:52.139272713 +0000 UTC m=+0.172479207 container init a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:19:52 np0005589310 friendly_antonelli[239007]: 167 167
Jan 20 14:19:52 np0005589310 podman[238979]: 2026-01-20 19:19:52.153739674 +0000 UTC m=+0.186946098 container start a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:19:52 np0005589310 systemd[1]: libpod-a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f.scope: Deactivated successfully.
Jan 20 14:19:52 np0005589310 conmon[239007]: conmon a9de9ad3b6aa1ba11698 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f.scope/container/memory.events
Jan 20 14:19:52 np0005589310 podman[238979]: 2026-01-20 19:19:52.158687825 +0000 UTC m=+0.191894319 container attach a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 20 14:19:52 np0005589310 podman[238979]: 2026-01-20 19:19:52.159110285 +0000 UTC m=+0.192316719 container died a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:19:52 np0005589310 systemd[1]: var-lib-containers-storage-overlay-94e69433b160e0ff16a18427dc4f2cb64d7db8b5fc3524962681a32ece01a63d-merged.mount: Deactivated successfully.
Jan 20 14:19:52 np0005589310 podman[238979]: 2026-01-20 19:19:52.217517962 +0000 UTC m=+0.250724416 container remove a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:19:52 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:19:52 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:52 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:52 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:52 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:52 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:52 np0005589310 systemd[1]: libpod-conmon-a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f.scope: Deactivated successfully.
Jan 20 14:19:52 np0005589310 podman[239008]: 2026-01-20 19:19:52.241823932 +0000 UTC m=+0.120083946 container init 26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 14:19:52 np0005589310 podman[239008]: 2026-01-20 19:19:52.24752077 +0000 UTC m=+0.125780764 container start 26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, container_name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:19:52 np0005589310 podman[239008]: nova_compute
Jan 20 14:19:52 np0005589310 nova_compute[239038]: + sudo -E kolla_set_configs
Jan 20 14:19:52 np0005589310 systemd[1]: Started nova_compute container.
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Validating config file
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Copying service configuration files
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Deleting /etc/ceph
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Creating directory /etc/ceph
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /etc/ceph
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Writing out command to execute
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:19:52 np0005589310 nova_compute[239038]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 14:19:52 np0005589310 nova_compute[239038]: ++ cat /run_command
Jan 20 14:19:52 np0005589310 nova_compute[239038]: + CMD=nova-compute
Jan 20 14:19:52 np0005589310 nova_compute[239038]: + ARGS=
Jan 20 14:19:52 np0005589310 nova_compute[239038]: + sudo kolla_copy_cacerts
Jan 20 14:19:52 np0005589310 nova_compute[239038]: + [[ ! -n '' ]]
Jan 20 14:19:52 np0005589310 nova_compute[239038]: + . kolla_extend_start
Jan 20 14:19:52 np0005589310 nova_compute[239038]: + echo 'Running command: '\''nova-compute'\'''
Jan 20 14:19:52 np0005589310 nova_compute[239038]: Running command: 'nova-compute'
Jan 20 14:19:52 np0005589310 nova_compute[239038]: + umask 0022
Jan 20 14:19:52 np0005589310 nova_compute[239038]: + exec nova-compute
Jan 20 14:19:52 np0005589310 podman[239062]: 2026-01-20 19:19:52.385101779 +0000 UTC m=+0.044256394 container create 4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:19:52 np0005589310 systemd[1]: Started libpod-conmon-4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea.scope.
Jan 20 14:19:52 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:19:52 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6fb5545428b9815f585fb8e2f2d5e33b0873efe2702b1e9565902bd7136dd5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:52 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6fb5545428b9815f585fb8e2f2d5e33b0873efe2702b1e9565902bd7136dd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:52 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6fb5545428b9815f585fb8e2f2d5e33b0873efe2702b1e9565902bd7136dd5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:52 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6fb5545428b9815f585fb8e2f2d5e33b0873efe2702b1e9565902bd7136dd5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:52 np0005589310 podman[239062]: 2026-01-20 19:19:52.367505953 +0000 UTC m=+0.026660368 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:19:52 np0005589310 podman[239062]: 2026-01-20 19:19:52.476019006 +0000 UTC m=+0.135173441 container init 4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 20 14:19:52 np0005589310 podman[239062]: 2026-01-20 19:19:52.486607613 +0000 UTC m=+0.145762018 container start 4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wozniak, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 14:19:52 np0005589310 podman[239062]: 2026-01-20 19:19:52.491393179 +0000 UTC m=+0.150547604 container attach 4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wozniak, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]: {
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:    "0": [
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:        {
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "devices": [
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "/dev/loop3"
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            ],
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "lv_name": "ceph_lv0",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "lv_size": "21470642176",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "name": "ceph_lv0",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "tags": {
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.cluster_name": "ceph",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.crush_device_class": "",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.encrypted": "0",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.objectstore": "bluestore",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.osd_id": "0",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.type": "block",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.vdo": "0",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.with_tpm": "0"
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            },
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "type": "block",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "vg_name": "ceph_vg0"
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:        }
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:    ],
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:    "1": [
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:        {
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "devices": [
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "/dev/loop4"
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            ],
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "lv_name": "ceph_lv1",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "lv_size": "21470642176",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "name": "ceph_lv1",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "tags": {
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.cluster_name": "ceph",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.crush_device_class": "",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.encrypted": "0",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.objectstore": "bluestore",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.osd_id": "1",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.type": "block",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.vdo": "0",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.with_tpm": "0"
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            },
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "type": "block",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "vg_name": "ceph_vg1"
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:        }
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:    ],
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:    "2": [
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:        {
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "devices": [
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "/dev/loop5"
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            ],
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "lv_name": "ceph_lv2",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "lv_size": "21470642176",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "name": "ceph_lv2",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "tags": {
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.cluster_name": "ceph",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.crush_device_class": "",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.encrypted": "0",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.objectstore": "bluestore",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.osd_id": "2",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.type": "block",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.vdo": "0",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:                "ceph.with_tpm": "0"
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            },
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "type": "block",
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:            "vg_name": "ceph_vg2"
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:        }
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]:    ]
Jan 20 14:19:52 np0005589310 serene_wozniak[239098]: }
Jan 20 14:19:52 np0005589310 systemd[1]: libpod-4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea.scope: Deactivated successfully.
Jan 20 14:19:52 np0005589310 podman[239062]: 2026-01-20 19:19:52.812057112 +0000 UTC m=+0.471211517 container died 4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wozniak, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:19:52 np0005589310 podman[239062]: 2026-01-20 19:19:52.860810765 +0000 UTC m=+0.519965170 container remove 4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wozniak, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:19:52 np0005589310 systemd[1]: libpod-conmon-4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea.scope: Deactivated successfully.
Jan 20 14:19:53 np0005589310 systemd[1]: var-lib-containers-storage-overlay-4a6fb5545428b9815f585fb8e2f2d5e33b0873efe2702b1e9565902bd7136dd5-merged.mount: Deactivated successfully.
Jan 20 14:19:53 np0005589310 python3.9[239235]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 20 14:19:53 np0005589310 systemd[1]: Started libpod-conmon-d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb.scope.
Jan 20 14:19:53 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:19:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee05ca57c0648f38e5192c173d39f2cee12a37bc51f4d3055824e69624120fb3/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee05ca57c0648f38e5192c173d39f2cee12a37bc51f4d3055824e69624120fb3/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee05ca57c0648f38e5192c173d39f2cee12a37bc51f4d3055824e69624120fb3/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:53 np0005589310 podman[239320]: 2026-01-20 19:19:53.291294584 +0000 UTC m=+0.111331894 container init d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:19:53 np0005589310 podman[239320]: 2026-01-20 19:19:53.299667867 +0000 UTC m=+0.119705157 container start d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3)
Jan 20 14:19:53 np0005589310 python3.9[239235]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 20 14:19:53 np0005589310 nova_compute_init[239358]: INFO:nova_statedir:Applying nova statedir ownership
Jan 20 14:19:53 np0005589310 nova_compute_init[239358]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 20 14:19:53 np0005589310 nova_compute_init[239358]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 20 14:19:53 np0005589310 nova_compute_init[239358]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 20 14:19:53 np0005589310 nova_compute_init[239358]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 20 14:19:53 np0005589310 nova_compute_init[239358]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 20 14:19:53 np0005589310 nova_compute_init[239358]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 20 14:19:53 np0005589310 nova_compute_init[239358]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 20 14:19:53 np0005589310 nova_compute_init[239358]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 20 14:19:53 np0005589310 nova_compute_init[239358]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 20 14:19:53 np0005589310 nova_compute_init[239358]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 20 14:19:53 np0005589310 nova_compute_init[239358]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 20 14:19:53 np0005589310 nova_compute_init[239358]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 20 14:19:53 np0005589310 nova_compute_init[239358]: INFO:nova_statedir:Nova statedir ownership complete
Jan 20 14:19:53 np0005589310 podman[239348]: 2026-01-20 19:19:53.354785904 +0000 UTC m=+0.057270731 container create 59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 14:19:53 np0005589310 systemd[1]: libpod-d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb.scope: Deactivated successfully.
Jan 20 14:19:53 np0005589310 podman[239365]: 2026-01-20 19:19:53.372760371 +0000 UTC m=+0.031425564 container died d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 14:19:53 np0005589310 systemd[1]: Started libpod-conmon-59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46.scope.
Jan 20 14:19:53 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:19:53 np0005589310 podman[239348]: 2026-01-20 19:19:53.419825762 +0000 UTC m=+0.122310609 container init 59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:19:53 np0005589310 podman[239348]: 2026-01-20 19:19:53.426338121 +0000 UTC m=+0.128822948 container start 59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:19:53 np0005589310 podman[239348]: 2026-01-20 19:19:53.334291557 +0000 UTC m=+0.036776404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:19:53 np0005589310 eager_bhaskara[239384]: 167 167
Jan 20 14:19:53 np0005589310 systemd[1]: libpod-59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46.scope: Deactivated successfully.
Jan 20 14:19:53 np0005589310 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb-userdata-shm.mount: Deactivated successfully.
Jan 20 14:19:53 np0005589310 podman[239348]: 2026-01-20 19:19:53.452287431 +0000 UTC m=+0.154772278 container attach 59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:19:53 np0005589310 podman[239348]: 2026-01-20 19:19:53.452825174 +0000 UTC m=+0.155310001 container died 59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:19:53 np0005589310 podman[239376]: 2026-01-20 19:19:53.466125316 +0000 UTC m=+0.086496459 container cleanup d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 14:19:53 np0005589310 systemd[1]: libpod-conmon-d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb.scope: Deactivated successfully.
Jan 20 14:19:53 np0005589310 podman[239348]: 2026-01-20 19:19:53.490719354 +0000 UTC m=+0.193204181 container remove 59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 14:19:53 np0005589310 systemd[1]: libpod-conmon-59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46.scope: Deactivated successfully.
Jan 20 14:19:53 np0005589310 podman[239449]: 2026-01-20 19:19:53.664615374 +0000 UTC m=+0.051409908 container create 02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 14:19:53 np0005589310 podman[239449]: 2026-01-20 19:19:53.638281615 +0000 UTC m=+0.025076149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:19:53 np0005589310 systemd[1]: Started libpod-conmon-02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4.scope.
Jan 20 14:19:53 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:19:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a022a509aacf4d1c48de25e0104baa5b20c04733bd69a64368185bb3350778/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a022a509aacf4d1c48de25e0104baa5b20c04733bd69a64368185bb3350778/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a022a509aacf4d1c48de25e0104baa5b20c04733bd69a64368185bb3350778/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:53 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a022a509aacf4d1c48de25e0104baa5b20c04733bd69a64368185bb3350778/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:19:53 np0005589310 podman[239449]: 2026-01-20 19:19:53.790387237 +0000 UTC m=+0.177181791 container init 02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kowalevski, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:19:53 np0005589310 podman[239449]: 2026-01-20 19:19:53.796656169 +0000 UTC m=+0.183450703 container start 02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:19:53 np0005589310 podman[239449]: 2026-01-20 19:19:53.801039166 +0000 UTC m=+0.187833690 container attach 02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kowalevski, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:19:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:53 np0005589310 systemd[1]: session-50.scope: Deactivated successfully.
Jan 20 14:19:53 np0005589310 systemd[1]: session-50.scope: Consumed 1min 56.749s CPU time.
Jan 20 14:19:53 np0005589310 systemd-logind[797]: Session 50 logged out. Waiting for processes to exit.
Jan 20 14:19:53 np0005589310 systemd-logind[797]: Removed session 50.
Jan 20 14:19:54 np0005589310 systemd[1]: var-lib-containers-storage-overlay-045b638c8013481ec7cd0ca172aadaf40382edda30127f7190a425edd5b1becf-merged.mount: Deactivated successfully.
Jan 20 14:19:54 np0005589310 systemd[1]: var-lib-containers-storage-overlay-ee05ca57c0648f38e5192c173d39f2cee12a37bc51f4d3055824e69624120fb3-merged.mount: Deactivated successfully.
Jan 20 14:19:54 np0005589310 lvm[239547]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:19:54 np0005589310 lvm[239547]: VG ceph_vg1 finished
Jan 20 14:19:54 np0005589310 lvm[239546]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:19:54 np0005589310 lvm[239546]: VG ceph_vg0 finished
Jan 20 14:19:54 np0005589310 lvm[239549]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:19:54 np0005589310 lvm[239549]: VG ceph_vg2 finished
Jan 20 14:19:54 np0005589310 nova_compute[239038]: 2026-01-20 19:19:54.503 239044 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 20 14:19:54 np0005589310 nova_compute[239038]: 2026-01-20 19:19:54.504 239044 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 20 14:19:54 np0005589310 nova_compute[239038]: 2026-01-20 19:19:54.504 239044 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 20 14:19:54 np0005589310 nova_compute[239038]: 2026-01-20 19:19:54.504 239044 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 20 14:19:54 np0005589310 lvm[239551]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:19:54 np0005589310 lvm[239551]: VG ceph_vg2 finished
Jan 20 14:19:54 np0005589310 affectionate_kowalevski[239466]: {}
Jan 20 14:19:54 np0005589310 lvm[239553]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:19:54 np0005589310 lvm[239553]: VG ceph_vg2 finished
Jan 20 14:19:54 np0005589310 systemd[1]: libpod-02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4.scope: Deactivated successfully.
Jan 20 14:19:54 np0005589310 systemd[1]: libpod-02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4.scope: Consumed 1.310s CPU time.
Jan 20 14:19:54 np0005589310 podman[239449]: 2026-01-20 19:19:54.570152962 +0000 UTC m=+0.956947516 container died 02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kowalevski, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:19:54 np0005589310 systemd[1]: var-lib-containers-storage-overlay-82a022a509aacf4d1c48de25e0104baa5b20c04733bd69a64368185bb3350778-merged.mount: Deactivated successfully.
Jan 20 14:19:54 np0005589310 podman[239449]: 2026-01-20 19:19:54.622738279 +0000 UTC m=+1.009532813 container remove 02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:19:54 np0005589310 systemd[1]: libpod-conmon-02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4.scope: Deactivated successfully.
Jan 20 14:19:54 np0005589310 nova_compute[239038]: 2026-01-20 19:19:54.657 239044 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:19:54 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:19:54 np0005589310 nova_compute[239038]: 2026-01-20 19:19:54.670 239044 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:19:54 np0005589310 nova_compute[239038]: 2026-01-20 19:19:54.671 239044 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 20 14:19:54 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:19:54 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:19:54 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.089 239044 INFO nova.virt.driver [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.194 239044 INFO nova.compute.provider_config [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.219 239044 DEBUG oslo_concurrency.lockutils [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.219 239044 DEBUG oslo_concurrency.lockutils [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.220 239044 DEBUG oslo_concurrency.lockutils [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.220 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.220 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.220 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.220 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.220 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.220 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.221 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.221 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.221 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.221 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.221 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.221 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.222 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.222 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.222 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.222 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.222 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.222 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.223 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.223 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.223 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.223 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.223 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.223 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.223 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.224 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.224 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.224 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.224 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.224 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.224 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.225 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.225 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.225 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.225 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.225 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.225 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.226 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.226 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.226 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.226 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.226 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.227 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.227 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.227 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.227 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.227 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.228 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.228 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.228 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.228 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.228 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.228 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.229 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.229 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.229 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.229 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.229 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.229 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.229 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.230 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.230 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.230 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.230 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.230 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.230 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.230 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.231 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.231 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.231 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.231 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.231 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.231 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.231 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.232 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.232 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.232 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.232 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.232 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.232 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.232 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.233 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.233 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.233 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.233 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.233 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.233 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.233 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.234 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.234 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.234 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.234 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.234 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.234 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.234 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.235 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.235 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.235 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.235 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.235 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.235 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.235 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.236 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.236 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.236 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.236 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.236 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.236 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.236 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.237 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.237 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.237 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.237 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.237 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.237 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.238 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.238 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.238 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.238 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.238 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.238 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.238 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.239 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.239 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.239 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.239 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.239 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.239 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.240 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.240 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.240 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.240 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.240 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.240 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.241 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.241 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.241 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.241 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.241 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.241 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.241 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.242 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.242 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.242 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.242 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.242 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.242 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.242 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.243 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.243 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.243 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.243 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.243 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.244 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.244 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.244 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.244 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.244 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.245 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.245 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.245 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.245 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.245 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.245 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.245 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.246 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.246 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.246 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.246 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.246 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.246 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.247 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.247 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.247 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.247 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.247 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.247 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.247 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.248 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.248 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.248 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.248 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.248 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.248 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.248 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.249 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.249 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.249 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.249 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.249 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.249 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.249 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.250 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.250 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.250 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.250 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.250 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.250 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.251 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.251 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.251 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.251 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.251 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.251 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.252 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.252 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.252 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.252 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.252 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.253 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.253 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.253 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.253 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.253 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.253 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.253 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.254 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.254 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.254 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.254 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.254 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.254 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.254 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.255 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.255 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.255 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.255 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.255 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.256 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.256 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.256 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.256 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.256 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.256 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.257 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.257 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.257 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.257 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.257 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.257 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.258 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.258 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.258 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.258 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.258 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.259 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.259 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.259 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.259 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.259 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.259 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.260 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.260 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.260 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.260 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.260 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.260 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.261 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.261 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.261 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.261 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.262 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.262 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.262 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.262 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.262 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.262 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.263 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.263 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.263 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.263 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.263 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.263 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.264 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.264 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.264 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.264 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.264 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.264 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.265 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.265 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.265 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.265 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.265 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.265 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.265 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.266 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.266 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.266 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.266 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.266 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.267 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.267 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.267 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.267 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.267 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.267 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.268 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.268 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.268 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.268 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.268 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.268 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.269 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.269 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.269 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.269 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.269 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.270 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.270 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.270 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.270 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.270 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.270 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.270 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.271 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.271 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.271 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.271 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.271 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.272 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.272 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.272 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.272 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.272 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.273 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.273 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.273 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.273 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.273 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.273 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.274 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.274 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.274 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.274 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.274 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.275 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.275 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.275 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.275 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.275 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.275 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.276 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.276 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.276 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.276 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.276 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.277 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.277 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.277 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.277 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.277 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.277 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.277 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.278 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.278 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.278 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.278 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.278 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.279 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.279 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.279 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.279 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.279 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.279 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.280 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.280 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.280 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.280 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.280 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.281 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.281 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.281 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.281 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.281 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.281 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.281 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.282 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.282 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.282 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.282 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.282 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.282 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.283 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.283 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.283 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.283 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.283 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.283 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.283 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.284 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.284 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.284 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.284 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.284 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.284 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.284 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.285 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.285 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.285 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.285 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.285 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.285 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.285 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.286 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.286 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.286 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.286 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.286 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.286 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.286 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.287 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.287 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.287 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.287 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.287 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.287 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.288 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.288 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.288 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.288 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.288 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.288 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.288 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.289 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.289 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.289 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.289 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.289 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.289 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.289 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.290 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.290 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.290 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.290 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.290 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.290 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.290 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.291 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.291 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.291 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.291 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.291 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.291 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.291 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.292 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.292 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.292 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.292 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.292 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.292 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.293 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.293 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.293 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.293 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.293 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.293 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.294 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.294 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.294 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.294 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.294 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.294 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.294 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.295 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.295 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.295 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.295 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.295 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.295 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.295 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.296 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.296 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.296 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.296 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.296 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.296 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.297 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.297 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.297 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.297 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.297 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.297 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.297 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.298 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.298 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.298 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.298 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.298 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.298 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.298 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.299 239044 WARNING oslo_config.cfg [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 20 14:19:55 np0005589310 nova_compute[239038]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 20 14:19:55 np0005589310 nova_compute[239038]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 20 14:19:55 np0005589310 nova_compute[239038]: and ``live_migration_inbound_addr`` respectively.
Jan 20 14:19:55 np0005589310 nova_compute[239038]: ).  Its value may be silently ignored in the future.#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.299 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.299 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.299 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.299 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.300 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.300 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.300 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.300 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.300 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.300 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.300 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.301 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.301 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.301 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.301 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.301 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.301 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rbd_secret_uuid        = 90fff835-31df-513f-a409-b6642f04e6ac log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.303 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.303 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.303 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.303 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.303 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.303 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.304 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.304 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.304 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.304 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.304 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.304 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.305 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.305 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.305 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.305 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.305 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.305 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.306 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.306 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.306 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.306 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.306 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.306 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.306 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.307 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.307 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.307 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.307 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.307 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.307 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.307 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.308 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.308 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.308 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.308 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.308 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.308 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.308 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.309 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.309 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.309 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.309 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.309 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.309 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.309 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.310 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.310 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.310 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.310 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.310 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.310 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.310 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.311 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.311 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.311 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.311 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.311 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.312 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.312 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.312 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.312 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.312 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.312 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.312 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.313 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.313 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.313 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.313 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.313 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.313 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.313 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.314 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.314 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.314 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.314 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.314 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.314 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.314 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.315 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.315 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.315 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.315 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.315 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.315 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.317 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.317 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.317 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.317 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.317 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.317 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.317 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.318 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.318 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.318 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.318 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.318 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.318 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.319 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.319 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.319 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.319 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.319 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.319 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.319 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.320 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.320 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.320 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.320 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.320 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.321 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.321 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.321 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.321 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.321 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.321 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.321 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.322 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.322 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.322 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.322 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.322 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.322 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.322 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.323 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.323 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.323 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.323 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.323 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.323 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.324 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.324 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.324 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.324 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.324 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.324 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.324 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.325 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.325 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.325 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.325 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.325 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.325 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.326 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.326 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.326 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.326 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.327 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.327 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.327 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.327 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.327 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.327 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.328 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.328 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.328 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.328 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.328 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.328 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.328 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.329 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.329 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.329 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.329 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.329 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.330 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.330 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.330 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.330 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.330 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.331 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.331 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.331 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.331 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.332 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.332 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.332 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.332 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.332 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.332 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.332 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.333 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.333 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.333 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.333 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.333 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.333 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.333 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.334 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.334 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.334 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.334 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.334 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.334 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.334 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.335 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.335 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.335 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.335 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.335 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.335 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.335 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.336 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.336 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.336 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.336 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.336 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.336 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.336 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.337 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.337 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.337 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.337 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.337 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.337 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.338 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.338 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.338 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.338 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.338 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.338 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.338 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.339 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.339 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.339 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.339 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.339 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.339 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.341 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.341 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.341 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.341 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.341 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.341 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.342 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.342 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.342 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.342 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.342 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.342 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.342 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.343 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.343 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.343 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.343 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.343 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.343 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.343 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.344 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.344 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.344 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.344 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.344 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.344 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.345 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.345 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.345 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.345 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.345 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.345 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.345 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.346 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.346 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.346 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.346 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.346 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.346 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.347 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.347 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.347 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.347 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.347 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.347 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.347 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.348 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.348 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.348 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.348 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.348 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.348 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.348 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.349 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.349 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.349 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.349 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.349 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.349 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.349 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.350 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.350 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.350 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.350 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.350 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.350 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.350 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.351 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.351 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.351 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.351 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.351 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.351 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.351 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.352 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.352 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.352 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.352 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.352 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.352 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.352 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.353 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.353 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.353 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.353 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.353 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.353 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.353 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.354 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.354 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.354 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.354 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.354 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.354 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.354 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.355 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.355 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.355 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.355 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.355 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.355 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.355 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.357 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.357 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.357 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.357 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.357 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.357 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.358 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.358 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.358 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.358 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.358 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.358 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.358 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.359 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.359 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.359 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.359 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.359 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.359 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.359 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.360 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.360 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.360 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.360 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.360 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.360 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.360 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.361 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.361 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.361 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.361 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.361 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.361 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.361 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.362 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.362 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.362 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.362 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.362 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.362 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.362 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.363 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.363 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.363 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.363 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.364 239044 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.377 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.377 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.378 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.378 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.391 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fdd64e57250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.393 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fdd64e57250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.394 239044 INFO nova.virt.libvirt.driver [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Connection event '1' reason 'None'
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.401 239044 INFO nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Libvirt host capabilities <capabilities>
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <host>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <uuid>6fed1acb-e03a-4246-8d49-1248ad1fe57b</uuid>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <cpu>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <arch>x86_64</arch>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model>EPYC-Rome-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <vendor>AMD</vendor>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <microcode version='16777317'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <signature family='23' model='49' stepping='0'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='x2apic'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='tsc-deadline'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='osxsave'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='hypervisor'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='tsc_adjust'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='spec-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='stibp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='arch-capabilities'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='ssbd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='cmp_legacy'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='topoext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='virt-ssbd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='lbrv'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='tsc-scale'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='vmcb-clean'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='pause-filter'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='pfthreshold'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='svme-addr-chk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='rdctl-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='skip-l1dfl-vmentry'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='mds-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature name='pschange-mc-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <pages unit='KiB' size='4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <pages unit='KiB' size='2048'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <pages unit='KiB' size='1048576'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </cpu>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <power_management>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <suspend_mem/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </power_management>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <iommu support='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <migration_features>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <live/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <uri_transports>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <uri_transport>tcp</uri_transport>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <uri_transport>rdma</uri_transport>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </uri_transports>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </migration_features>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <topology>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <cells num='1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <cell id='0'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:          <memory unit='KiB'>7864312</memory>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:          <pages unit='KiB' size='4'>1966078</pages>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:          <pages unit='KiB' size='2048'>0</pages>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:          <distances>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:            <sibling id='0' value='10'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:          </distances>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:          <cpus num='8'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:          </cpus>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        </cell>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </cells>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </topology>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <cache>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </cache>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <secmodel>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model>selinux</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <doi>0</doi>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </secmodel>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <secmodel>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model>dac</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <doi>0</doi>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </secmodel>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </host>
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <guest>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <os_type>hvm</os_type>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <arch name='i686'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <wordsize>32</wordsize>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <domain type='qemu'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <domain type='kvm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </arch>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <features>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <pae/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <nonpae/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <acpi default='on' toggle='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <apic default='on' toggle='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <cpuselection/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <deviceboot/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <disksnapshot default='on' toggle='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <externalSnapshot/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </features>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </guest>
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <guest>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <os_type>hvm</os_type>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <arch name='x86_64'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <wordsize>64</wordsize>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <domain type='qemu'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <domain type='kvm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </arch>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <features>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <acpi default='on' toggle='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <apic default='on' toggle='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <cpuselection/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <deviceboot/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <disksnapshot default='on' toggle='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <externalSnapshot/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </features>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </guest>
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 
Jan 20 14:19:55 np0005589310 nova_compute[239038]: </capabilities>
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.408 239044 WARNING nova.virt.libvirt.driver [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.409 239044 DEBUG nova.virt.libvirt.volume.mount [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.414 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.431 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 20 14:19:55 np0005589310 nova_compute[239038]: <domainCapabilities>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <domain>kvm</domain>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <arch>i686</arch>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <vcpu max='4096'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <iothreads supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <os supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <enum name='firmware'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <loader supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>rom</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pflash</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='readonly'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>yes</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>no</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='secure'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>no</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </loader>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </os>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <cpu>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='host-passthrough' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='hostPassthroughMigratable'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>on</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>off</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='maximum' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='maximumMigratable'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>on</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>off</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='host-model' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <vendor>AMD</vendor>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='x2apic'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='tsc-deadline'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='hypervisor'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='tsc_adjust'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='spec-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='stibp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='ssbd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='cmp_legacy'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='overflow-recov'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='succor'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='amd-ssbd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='virt-ssbd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='lbrv'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='tsc-scale'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='vmcb-clean'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='flushbyasid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='pause-filter'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='pfthreshold'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='disable' name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='custom' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='ClearwaterForest'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ddpd-u'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sha512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm3'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='ClearwaterForest-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ddpd-u'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sha512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm3'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cooperlake'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cooperlake-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cooperlake-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Dhyana-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Genoa'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Genoa-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Genoa-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fs-gs-base-ns'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='perfmon-v2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Turin'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vp2intersect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fs-gs-base-ns'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibpb-brtype'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='perfmon-v2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbpb'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='srso-user-kernel-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Turin-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vp2intersect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fs-gs-base-ns'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibpb-brtype'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='perfmon-v2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbpb'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='srso-user-kernel-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-128'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-256'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-128'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-256'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v6'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v7'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='KnightsMill'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4fmaps'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4vnniw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512er'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512pf'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='KnightsMill-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4fmaps'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4vnniw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512er'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512pf'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G4-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tbm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G5-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tbm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='athlon'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='athlon-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='core2duo'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='core2duo-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='coreduo'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='coreduo-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='n270'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='n270-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='phenom'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='phenom-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </cpu>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <memoryBacking supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <enum name='sourceType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>file</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>anonymous</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>memfd</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </memoryBacking>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <devices>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <disk supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='diskDevice'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>disk</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>cdrom</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>floppy</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>lun</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='bus'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>fdc</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>scsi</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>usb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>sata</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-non-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </disk>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <graphics supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vnc</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>egl-headless</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>dbus</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </graphics>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <video supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='modelType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vga</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>cirrus</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>none</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>bochs</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>ramfb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </video>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <hostdev supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='mode'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>subsystem</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='startupPolicy'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>default</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>mandatory</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>requisite</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>optional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='subsysType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>usb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pci</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>scsi</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='capsType'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='pciBackend'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </hostdev>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <rng supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-non-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendModel'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>random</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>egd</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>builtin</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </rng>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <filesystem supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='driverType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>path</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>handle</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtiofs</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </filesystem>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <tpm supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tpm-tis</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tpm-crb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendModel'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>emulator</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>external</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendVersion'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>2.0</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </tpm>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <redirdev supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='bus'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>usb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </redirdev>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <channel supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pty</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>unix</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </channel>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <crypto supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>qemu</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendModel'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>builtin</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </crypto>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <interface supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>default</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>passt</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </interface>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <panic supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>isa</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>hyperv</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </panic>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <console supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>null</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vc</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pty</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>dev</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>file</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pipe</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>stdio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>udp</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tcp</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>unix</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>qemu-vdagent</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>dbus</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </console>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </devices>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <features>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <gic supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <vmcoreinfo supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <genid supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <backingStoreInput supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <backup supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <async-teardown supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <s390-pv supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <ps2 supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <tdx supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <sev supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <sgx supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <hyperv supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='features'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>relaxed</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vapic</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>spinlocks</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vpindex</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>runtime</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>synic</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>stimer</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>reset</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vendor_id</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>frequencies</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>reenlightenment</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tlbflush</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>ipi</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>avic</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>emsr_bitmap</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>xmm_input</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <defaults>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <spinlocks>4095</spinlocks>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <stimer_direct>on</stimer_direct>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </defaults>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </hyperv>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <launchSecurity supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </features>
Jan 20 14:19:55 np0005589310 nova_compute[239038]: </domainCapabilities>
Jan 20 14:19:55 np0005589310 nova_compute[239038]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.450 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 20 14:19:55 np0005589310 nova_compute[239038]: <domainCapabilities>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <domain>kvm</domain>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <arch>i686</arch>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <vcpu max='240'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <iothreads supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <os supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <enum name='firmware'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <loader supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>rom</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pflash</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='readonly'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>yes</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>no</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='secure'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>no</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </loader>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </os>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <cpu>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='host-passthrough' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='hostPassthroughMigratable'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>on</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>off</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='maximum' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='maximumMigratable'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>on</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>off</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='host-model' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <vendor>AMD</vendor>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='x2apic'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='tsc-deadline'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='hypervisor'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='tsc_adjust'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='spec-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='stibp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='ssbd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='cmp_legacy'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='overflow-recov'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='succor'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='amd-ssbd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='virt-ssbd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='lbrv'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='tsc-scale'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='vmcb-clean'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='flushbyasid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='pause-filter'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='pfthreshold'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='disable' name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='custom' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='ClearwaterForest'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ddpd-u'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sha512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm3'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='ClearwaterForest-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ddpd-u'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sha512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm3'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cooperlake'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cooperlake-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cooperlake-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Dhyana-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Genoa'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Genoa-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Genoa-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fs-gs-base-ns'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='perfmon-v2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Turin'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vp2intersect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fs-gs-base-ns'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibpb-brtype'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='perfmon-v2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbpb'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='srso-user-kernel-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Turin-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vp2intersect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fs-gs-base-ns'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibpb-brtype'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='perfmon-v2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbpb'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='srso-user-kernel-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-128'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-256'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-128'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-256'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v6'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v7'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='KnightsMill'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4fmaps'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4vnniw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512er'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512pf'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='KnightsMill-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4fmaps'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4vnniw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512er'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512pf'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G4-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tbm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G5-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tbm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='athlon'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='athlon-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='core2duo'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='core2duo-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='coreduo'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='coreduo-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='n270'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='n270-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='phenom'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='phenom-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </cpu>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <memoryBacking supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <enum name='sourceType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>file</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>anonymous</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>memfd</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </memoryBacking>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <devices>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <disk supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='diskDevice'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>disk</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>cdrom</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>floppy</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>lun</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='bus'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>ide</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>fdc</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>scsi</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>usb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>sata</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-non-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </disk>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <graphics supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vnc</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>egl-headless</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>dbus</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </graphics>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <video supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='modelType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vga</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>cirrus</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>none</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>bochs</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>ramfb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </video>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <hostdev supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='mode'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>subsystem</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='startupPolicy'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>default</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>mandatory</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>requisite</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>optional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='subsysType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>usb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pci</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>scsi</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='capsType'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='pciBackend'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </hostdev>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <rng supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-non-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendModel'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>random</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>egd</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>builtin</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </rng>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <filesystem supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='driverType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>path</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>handle</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtiofs</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </filesystem>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <tpm supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tpm-tis</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tpm-crb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendModel'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>emulator</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>external</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendVersion'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>2.0</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </tpm>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <redirdev supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='bus'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>usb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </redirdev>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <channel supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pty</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>unix</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </channel>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <crypto supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>qemu</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendModel'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>builtin</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </crypto>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <interface supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>default</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>passt</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </interface>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <panic supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>isa</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>hyperv</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </panic>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <console supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>null</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vc</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pty</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>dev</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>file</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pipe</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>stdio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>udp</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tcp</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>unix</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>qemu-vdagent</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>dbus</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </console>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </devices>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <features>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <gic supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <vmcoreinfo supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <genid supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <backingStoreInput supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <backup supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <async-teardown supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <s390-pv supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <ps2 supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <tdx supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <sev supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <sgx supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <hyperv supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='features'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>relaxed</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vapic</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>spinlocks</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vpindex</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>runtime</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>synic</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>stimer</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>reset</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vendor_id</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>frequencies</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>reenlightenment</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tlbflush</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>ipi</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>avic</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>emsr_bitmap</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>xmm_input</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <defaults>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <spinlocks>4095</spinlocks>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <stimer_direct>on</stimer_direct>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </defaults>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </hyperv>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <launchSecurity supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </features>
Jan 20 14:19:55 np0005589310 nova_compute[239038]: </domainCapabilities>
Jan 20 14:19:55 np0005589310 nova_compute[239038]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.498 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.503 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 20 14:19:55 np0005589310 nova_compute[239038]: <domainCapabilities>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <domain>kvm</domain>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <arch>x86_64</arch>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <vcpu max='240'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <iothreads supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <os supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <enum name='firmware'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <loader supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>rom</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pflash</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='readonly'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>yes</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>no</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='secure'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>no</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </loader>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </os>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <cpu>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='host-passthrough' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='hostPassthroughMigratable'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>on</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>off</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='maximum' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='maximumMigratable'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>on</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>off</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='host-model' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <vendor>AMD</vendor>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='x2apic'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='tsc-deadline'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='hypervisor'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='tsc_adjust'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='spec-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='stibp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='ssbd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='cmp_legacy'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='overflow-recov'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='succor'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='amd-ssbd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='virt-ssbd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='lbrv'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='tsc-scale'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='vmcb-clean'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='flushbyasid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='pause-filter'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='pfthreshold'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='disable' name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='custom' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='ClearwaterForest'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ddpd-u'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sha512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm3'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='ClearwaterForest-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ddpd-u'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sha512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm3'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cooperlake'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cooperlake-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cooperlake-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Dhyana-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Genoa'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Genoa-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Genoa-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fs-gs-base-ns'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='perfmon-v2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Turin'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vp2intersect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fs-gs-base-ns'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibpb-brtype'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='perfmon-v2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbpb'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='srso-user-kernel-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Turin-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vp2intersect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fs-gs-base-ns'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibpb-brtype'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='perfmon-v2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbpb'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='srso-user-kernel-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-128'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-256'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-128'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-256'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v6'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v7'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='KnightsMill'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4fmaps'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4vnniw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512er'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512pf'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='KnightsMill-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4fmaps'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4vnniw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512er'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512pf'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G4-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tbm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G5-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tbm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='athlon'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='athlon-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='core2duo'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='core2duo-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='coreduo'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='coreduo-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='n270'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='n270-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='phenom'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='phenom-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </cpu>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <memoryBacking supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <enum name='sourceType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>file</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>anonymous</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>memfd</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </memoryBacking>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <devices>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <disk supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='diskDevice'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>disk</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>cdrom</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>floppy</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>lun</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='bus'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>ide</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>fdc</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>scsi</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>usb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>sata</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-non-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </disk>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <graphics supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vnc</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>egl-headless</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>dbus</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </graphics>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <video supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='modelType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vga</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>cirrus</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>none</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>bochs</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>ramfb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </video>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <hostdev supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='mode'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>subsystem</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='startupPolicy'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>default</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>mandatory</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>requisite</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>optional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='subsysType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>usb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pci</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>scsi</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='capsType'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='pciBackend'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </hostdev>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <rng supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-non-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendModel'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>random</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>egd</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>builtin</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </rng>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <filesystem supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='driverType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>path</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>handle</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtiofs</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </filesystem>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <tpm supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tpm-tis</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tpm-crb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendModel'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>emulator</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>external</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendVersion'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>2.0</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </tpm>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <redirdev supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='bus'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>usb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </redirdev>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <channel supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pty</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>unix</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </channel>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <crypto supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>qemu</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendModel'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>builtin</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </crypto>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <interface supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>default</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>passt</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </interface>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <panic supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>isa</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>hyperv</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </panic>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <console supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>null</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vc</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pty</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>dev</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>file</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pipe</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>stdio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>udp</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tcp</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>unix</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>qemu-vdagent</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>dbus</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </console>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </devices>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <features>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <gic supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <vmcoreinfo supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <genid supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <backingStoreInput supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <backup supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <async-teardown supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <s390-pv supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <ps2 supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <tdx supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <sev supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <sgx supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <hyperv supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='features'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>relaxed</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vapic</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>spinlocks</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vpindex</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>runtime</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>synic</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>stimer</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>reset</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vendor_id</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>frequencies</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>reenlightenment</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tlbflush</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>ipi</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>avic</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>emsr_bitmap</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>xmm_input</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <defaults>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <spinlocks>4095</spinlocks>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <stimer_direct>on</stimer_direct>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </defaults>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </hyperv>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <launchSecurity supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </features>
Jan 20 14:19:55 np0005589310 nova_compute[239038]: </domainCapabilities>
Jan 20 14:19:55 np0005589310 nova_compute[239038]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
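The `domainCapabilities` record above enumerates, per CPU mode, which named models are `usable` on this host and which `blockers` features make the rest unusable. A minimal sketch of consuming such a dump with the standard-library `xml.etree.ElementTree` is below; the embedded XML is a trimmed, hypothetical excerpt mirroring the logged structure, not the full document:

```python
# Sketch: summarize custom-mode CPU models from a libvirt domainCapabilities
# dump like the one logged above. CAPS_XML is a trimmed illustrative excerpt.
import xml.etree.ElementTree as ET

CAPS_XML = """
<domainCapabilities>
  <cpu>
    <mode name='custom' supported='yes'>
      <model usable='yes' vendor='unknown'>qemu64-v1</model>
      <model usable='no' vendor='Intel'>Broadwell-v1</model>
      <blockers model='Broadwell-v1'>
        <feature name='erms'/>
        <feature name='rtm'/>
      </blockers>
    </mode>
  </cpu>
</domainCapabilities>
"""

def cpu_model_report(caps_xml: str) -> dict:
    """Map each custom-mode CPU model name to its usable flag and blockers."""
    root = ET.fromstring(caps_xml)
    mode = root.find(".//cpu/mode[@name='custom']")
    report = {}
    for model in mode.findall("model"):
        # The element text is the model name; 'usable' is an attribute.
        report[model.text] = {"usable": model.get("usable"), "blockers": []}
    for blk in mode.findall("blockers"):
        # <blockers model='X'> lists the host-missing features for model X.
        report[blk.get("model")]["blockers"] = [
            f.get("name") for f in blk.findall("feature")
        ]
    return report

print(cpu_model_report(CAPS_XML))
```

In a live deployment the same XML would come from libvirt's `getDomainCapabilities()` call on an open connection (which is what nova's `_get_domain_capabilities` wraps), rather than from an inline string.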
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.582 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 20 14:19:55 np0005589310 nova_compute[239038]: <domainCapabilities>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <path>/usr/libexec/qemu-kvm</path>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <domain>kvm</domain>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <arch>x86_64</arch>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <vcpu max='4096'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <iothreads supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <os supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <enum name='firmware'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>efi</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <loader supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>rom</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pflash</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='readonly'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>yes</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>no</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='secure'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>yes</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>no</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </loader>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </os>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <cpu>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='host-passthrough' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='hostPassthroughMigratable'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>on</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>off</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='maximum' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='maximumMigratable'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>on</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>off</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='host-model' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <vendor>AMD</vendor>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='x2apic'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='tsc-deadline'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='hypervisor'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='tsc_adjust'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='spec-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='stibp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='ssbd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='cmp_legacy'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='overflow-recov'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='succor'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='amd-ssbd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='virt-ssbd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='lbrv'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='tsc-scale'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='vmcb-clean'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='flushbyasid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='pause-filter'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='pfthreshold'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='svme-addr-chk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <feature policy='disable' name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <mode name='custom' supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Broadwell-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cascadelake-Server-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='ClearwaterForest'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ddpd-u'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sha512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm3'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='ClearwaterForest-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ddpd-u'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sha512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm3'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sm4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cooperlake'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cooperlake-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Cooperlake-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Denverton-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Dhyana-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Genoa'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Genoa-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Genoa-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fs-gs-base-ns'/>
Jan 20 14:19:55 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='perfmon-v2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Milan-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Rome-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Turin'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vp2intersect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fs-gs-base-ns'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibpb-brtype'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='perfmon-v2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbpb'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='srso-user-kernel-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-Turin-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amd-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='auto-ibrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vp2intersect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fs-gs-base-ns'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibpb-brtype'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='no-nested-data-bp'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='null-sel-clr-base'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='perfmon-v2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbpb'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='srso-user-kernel-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='stibp-always-on'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='EPYC-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-128'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-256'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='GraniteRapids-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-128'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-256'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx10-512'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='prefetchiti'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Haswell-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-noTSX'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v6'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Icelake-Server-v7'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='IvyBridge-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='KnightsMill'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4fmaps'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4vnniw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512er'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512pf'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='KnightsMill-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4fmaps'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-4vnniw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512er'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512pf'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G4-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tbm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Opteron_G5-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fma4'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tbm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xop'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SapphireRapids-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='amx-tile'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-bf16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-fp16'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512-vpopcntdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bitalg'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vbmi2'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrc'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fzrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='la57'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='taa-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='tsx-ldtrk'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='SierraForest-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ifma'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-ne-convert'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx-vnni-int8'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bhi-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='bus-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cmpccxadd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fbsdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='fsrs'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ibrs-all'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='intel-psfd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ipred-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='lam'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mcdt-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pbrsb-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='psdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rrsba-ctrl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='sbdr-ssdp-no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='serialize'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vaes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='vpclmulqdq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Client-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='hle'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='rtm'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Skylake-Server-v5'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512bw'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512cd'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512dq'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512f'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='avx512vl'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='invpcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pcid'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='pku'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='mpx'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v2'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v3'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='core-capability'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='split-lock-detect'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='Snowridge-v4'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='cldemote'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='erms'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='gfni'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdir64b'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='movdiri'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='xsaves'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='athlon'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='athlon-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='core2duo'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='core2duo-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='coreduo'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='coreduo-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='n270'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='n270-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='ss'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='phenom'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <blockers model='phenom-v1'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnow'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <feature name='3dnowext'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </blockers>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </mode>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </cpu>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <memoryBacking supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <enum name='sourceType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>file</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>anonymous</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <value>memfd</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </memoryBacking>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <devices>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <disk supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='diskDevice'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>disk</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>cdrom</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>floppy</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>lun</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='bus'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>fdc</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>scsi</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>usb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>sata</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-non-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </disk>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <graphics supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vnc</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>egl-headless</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>dbus</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </graphics>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <video supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='modelType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vga</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>cirrus</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>none</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>bochs</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>ramfb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </video>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <hostdev supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='mode'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>subsystem</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='startupPolicy'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>default</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>mandatory</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>requisite</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>optional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='subsysType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>usb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pci</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>scsi</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='capsType'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='pciBackend'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </hostdev>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <rng supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtio-non-transitional</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendModel'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>random</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>egd</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>builtin</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </rng>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <filesystem supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='driverType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>path</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>handle</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>virtiofs</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </filesystem>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <tpm supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tpm-tis</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tpm-crb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendModel'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>emulator</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>external</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendVersion'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>2.0</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </tpm>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <redirdev supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='bus'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>usb</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </redirdev>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <channel supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pty</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>unix</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </channel>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <crypto supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>qemu</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendModel'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>builtin</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </crypto>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <interface supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='backendType'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>default</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>passt</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </interface>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <panic supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='model'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>isa</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>hyperv</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </panic>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <console supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='type'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>null</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vc</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pty</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>dev</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>file</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>pipe</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>stdio</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>udp</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tcp</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>unix</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>qemu-vdagent</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>dbus</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </console>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </devices>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  <features>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <gic supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <vmcoreinfo supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <genid supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <backingStoreInput supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <backup supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <async-teardown supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <s390-pv supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <ps2 supported='yes'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <tdx supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <sev supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <sgx supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <hyperv supported='yes'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <enum name='features'>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>relaxed</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vapic</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>spinlocks</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vpindex</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>runtime</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>synic</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>stimer</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>reset</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>vendor_id</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>frequencies</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>reenlightenment</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>tlbflush</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>ipi</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>avic</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>emsr_bitmap</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <value>xmm_input</value>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </enum>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      <defaults>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <spinlocks>4095</spinlocks>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <stimer_direct>on</stimer_direct>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <tlbflush_direct>on</tlbflush_direct>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <tlbflush_extended>on</tlbflush_extended>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:      </defaults>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    </hyperv>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:    <launchSecurity supported='no'/>
Jan 20 14:19:55 np0005589310 nova_compute[239038]:  </features>
Jan 20 14:19:55 np0005589310 nova_compute[239038]: </domainCapabilities>
Jan 20 14:19:55 np0005589310 nova_compute[239038]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.660 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.666 239044 INFO nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Secure Boot support detected
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.668 239044 INFO nova.virt.libvirt.driver [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.675 239044 DEBUG nova.virt.libvirt.driver [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.710 239044 INFO nova.virt.node [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Determined node identity 178956bf-6050-42b7-876f-3f96271cf4ff from /var/lib/nova/compute_id#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.729 239044 WARNING nova.compute.manager [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Compute nodes ['178956bf-6050-42b7-876f-3f96271cf4ff'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.762 239044 INFO nova.compute.manager [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.792 239044 WARNING nova.compute.manager [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.792 239044 DEBUG oslo_concurrency.lockutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.793 239044 DEBUG oslo_concurrency.lockutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.793 239044 DEBUG oslo_concurrency.lockutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.793 239044 DEBUG nova.compute.resource_tracker [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 20 14:19:55 np0005589310 nova_compute[239038]: 2026-01-20 19:19:55.793 239044 DEBUG oslo_concurrency.processutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:19:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:56 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:19:56 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3114600400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:19:56 np0005589310 nova_compute[239038]: 2026-01-20 19:19:56.336 239044 DEBUG oslo_concurrency.processutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
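The `ceph df --format=json` subprocess above is how the resource tracker sizes the RBD-backed disk pool. A minimal parsing sketch — the sample payload below is hypothetical and abbreviated, with byte counts chosen to mirror the ~60 GiB total / 136 MiB used figures in the pgmap lines; real `ceph df` output carries many more fields:

```python
import json

# Hypothetical, abbreviated `ceph df --format=json` payload; the byte
# counts roughly match the pgmap lines (60 GiB total, 136 MiB used).
sample = json.dumps({
    "stats": {
        "total_bytes": 64424509440,
        "total_used_bytes": 142606336,
        "total_avail_bytes": 64281903104,
    }
})

def free_capacity_gib(ceph_df_json: str) -> float:
    """Available capacity in GiB from a `ceph df` JSON document."""
    stats = json.loads(ceph_df_json)["stats"]
    return stats["total_avail_bytes"] / 1024 ** 3

print(round(free_capacity_gib(sample), 2))
```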
Jan 20 14:19:56 np0005589310 systemd[1]: Starting libvirt nodedev daemon...
Jan 20 14:19:56 np0005589310 systemd[1]: Started libvirt nodedev daemon.
Jan 20 14:19:56 np0005589310 nova_compute[239038]: 2026-01-20 19:19:56.647 239044 WARNING nova.virt.libvirt.driver [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 20 14:19:56 np0005589310 nova_compute[239038]: 2026-01-20 19:19:56.648 239044 DEBUG nova.compute.resource_tracker [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5157MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 20 14:19:56 np0005589310 nova_compute[239038]: 2026-01-20 19:19:56.649 239044 DEBUG oslo_concurrency.lockutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:19:56 np0005589310 nova_compute[239038]: 2026-01-20 19:19:56.649 239044 DEBUG oslo_concurrency.lockutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:19:56 np0005589310 nova_compute[239038]: 2026-01-20 19:19:56.664 239044 WARNING nova.compute.resource_tracker [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] No compute node record for compute-0.ctlplane.example.com:178956bf-6050-42b7-876f-3f96271cf4ff: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 178956bf-6050-42b7-876f-3f96271cf4ff could not be found.#033[00m
Jan 20 14:19:56 np0005589310 nova_compute[239038]: 2026-01-20 19:19:56.680 239044 INFO nova.compute.resource_tracker [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 178956bf-6050-42b7-876f-3f96271cf4ff#033[00m
Jan 20 14:19:56 np0005589310 nova_compute[239038]: 2026-01-20 19:19:56.735 239044 DEBUG nova.compute.resource_tracker [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 20 14:19:56 np0005589310 nova_compute[239038]: 2026-01-20 19:19:56.735 239044 DEBUG nova.compute.resource_tracker [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 20 14:19:57 np0005589310 nova_compute[239038]: 2026-01-20 19:19:57.574 239044 INFO nova.scheduler.client.report [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] [req-0aabd44f-9296-4a48-bff3-e34edea8db97] Created resource provider record via placement API for resource provider with UUID 178956bf-6050-42b7-876f-3f96271cf4ff and name compute-0.ctlplane.example.com.#033[00m
Jan 20 14:19:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:19:57 np0005589310 nova_compute[239038]: 2026-01-20 19:19:57.985 239044 DEBUG oslo_concurrency.processutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:19:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:19:58 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3012488376' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:19:58 np0005589310 nova_compute[239038]: 2026-01-20 19:19:58.516 239044 DEBUG oslo_concurrency.processutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:19:58 np0005589310 nova_compute[239038]: 2026-01-20 19:19:58.521 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 20 14:19:58 np0005589310 nova_compute[239038]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Jan 20 14:19:58 np0005589310 nova_compute[239038]: 2026-01-20 19:19:58.521 239044 INFO nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] kernel doesn't support AMD SEV#033[00m
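The SEV probe logged just above reads `/sys/module/kvm_amd/parameters/sev`; on this KVM guest the file contains `N`, so nova reports no AMD SEV support. A loose sketch of that interpretation — the helper name here is made up for illustration; nova's real check lives in `_kernel_supports_amd_sev`:

```python
def kernel_supports_amd_sev(param_text: str) -> bool:
    """Interpret the kvm_amd 'sev' module parameter: '1' or 'Y' means the
    kernel advertises AMD SEV support; anything else (e.g. 'N') means no."""
    return param_text.strip() in ("1", "Y")

# The log above shows the sysfs file held "N" plus a trailing newline:
print(kernel_supports_amd_sev("N\n"))  # → False
```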
Jan 20 14:19:58 np0005589310 nova_compute[239038]: 2026-01-20 19:19:58.522 239044 DEBUG nova.compute.provider_tree [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Updating inventory in ProviderTree for provider 178956bf-6050-42b7-876f-3f96271cf4ff with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 20 14:19:58 np0005589310 nova_compute[239038]: 2026-01-20 19:19:58.523 239044 DEBUG nova.virt.libvirt.driver [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 20 14:19:58 np0005589310 nova_compute[239038]: 2026-01-20 19:19:58.657 239044 DEBUG nova.scheduler.client.report [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Updated inventory for provider 178956bf-6050-42b7-876f-3f96271cf4ff with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 20 14:19:58 np0005589310 nova_compute[239038]: 2026-01-20 19:19:58.657 239044 DEBUG nova.compute.provider_tree [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Updating resource provider 178956bf-6050-42b7-876f-3f96271cf4ff generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 20 14:19:58 np0005589310 nova_compute[239038]: 2026-01-20 19:19:58.657 239044 DEBUG nova.compute.provider_tree [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Updating inventory in ProviderTree for provider 178956bf-6050-42b7-876f-3f96271cf4ff with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
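The inventory pushed to Placement above determines schedulable capacity per resource class as (total − reserved) × allocation_ratio: 8 VCPUs at ratio 4.0 schedule as 32, 512 MB of the 7679 MB RAM is held back, and the 59 GB disk is discounted to 0.9. A hypothetical helper applying that arithmetic to the logged inventory (unit fields omitted for brevity):

```python
# Inventory as reported to Placement in the log above (unit fields omitted).
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "DISK_GB": {"total": 59, "reserved": 0, "allocation_ratio": 0.9},
}

def schedulable(inv: dict) -> dict:
    """Capacity the scheduler may allocate: (total - reserved) * allocation_ratio."""
    return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
            for rc, v in inv.items()}

print(schedulable(inventory))
```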
Jan 20 14:19:58 np0005589310 nova_compute[239038]: 2026-01-20 19:19:58.741 239044 DEBUG nova.compute.provider_tree [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Updating resource provider 178956bf-6050-42b7-876f-3f96271cf4ff generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 20 14:19:58 np0005589310 nova_compute[239038]: 2026-01-20 19:19:58.765 239044 DEBUG nova.compute.resource_tracker [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 20 14:19:58 np0005589310 nova_compute[239038]: 2026-01-20 19:19:58.765 239044 DEBUG oslo_concurrency.lockutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:19:58 np0005589310 nova_compute[239038]: 2026-01-20 19:19:58.765 239044 DEBUG nova.service [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Jan 20 14:19:58 np0005589310 nova_compute[239038]: 2026-01-20 19:19:58.871 239044 DEBUG nova.service [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Jan 20 14:19:58 np0005589310 nova_compute[239038]: 2026-01-20 19:19:58.872 239044 DEBUG nova.servicegroup.drivers.db [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Jan 20 14:19:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:19:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:20:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:20:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:20:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:20:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:20:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:20:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:20:05.445 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:20:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:20:05.446 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:20:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:20:05.446 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:20:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:08 np0005589310 podman[239685]: 2026-01-20 19:20:08.08217267 +0000 UTC m=+0.086470389 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:20:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:09 np0005589310 podman[239712]: 2026-01-20 19:20:09.377790295 +0000 UTC m=+0.058532641 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 20 14:20:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 14:20:17 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2394021803' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 14:20:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 14:20:17 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2394021803' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 14:20:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 14:20:18 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3977705230' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 14:20:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 14:20:18 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3977705230' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 14:20:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 14:20:18 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2030412672' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 14:20:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 14:20:18 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2030412672' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 14:20:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:25 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:27 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:29 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:20:31
Jan 20 14:20:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:20:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:20:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['.rgw.root', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes']
Jan 20 14:20:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:20:31 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:33 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:20:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:20:35 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:37 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:38 np0005589310 podman[239732]: 2026-01-20 19:20:38.406181773 +0000 UTC m=+0.077757587 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 20 14:20:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:39 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:40 np0005589310 podman[239758]: 2026-01-20 19:20:40.386348705 +0000 UTC m=+0.056238677 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:20:41 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:43 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:20:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:20:45 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:46 np0005589310 nova_compute[239038]: 2026-01-20 19:20:46.874 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:20:47 np0005589310 nova_compute[239038]: 2026-01-20 19:20:47.014 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:20:47 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 20 14:20:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3981231077' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 20 14:20:49 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 20 14:20:49 np0005589310 ceph-mgr[75417]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 14:20:49 np0005589310 ceph-mgr[75417]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 14:20:49 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:51 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:53 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.684 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.685 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.685 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.685 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.698 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.698 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.699 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.699 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.699 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.699 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.699 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.700 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.700 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.720 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.720 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.721 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.721 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 20 14:20:54 np0005589310 nova_compute[239038]: 2026-01-20 19:20:54.722 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4031378403' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:20:55 np0005589310 nova_compute[239038]: 2026-01-20 19:20:55.286 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:20:55 np0005589310 nova_compute[239038]: 2026-01-20 19:20:55.442 239044 WARNING nova.virt.libvirt.driver [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 20 14:20:55 np0005589310 nova_compute[239038]: 2026-01-20 19:20:55.443 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5166MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 20 14:20:55 np0005589310 nova_compute[239038]: 2026-01-20 19:20:55.443 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:20:55 np0005589310 nova_compute[239038]: 2026-01-20 19:20:55.443 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:20:55 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:20:55 np0005589310 nova_compute[239038]: 2026-01-20 19:20:55.537 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 20 14:20:55 np0005589310 nova_compute[239038]: 2026-01-20 19:20:55.537 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 20 14:20:55 np0005589310 nova_compute[239038]: 2026-01-20 19:20:55.557 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:20:55 np0005589310 podman[239961]: 2026-01-20 19:20:55.811764473 +0000 UTC m=+0.041552560 container create 2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:20:55 np0005589310 systemd[1]: Started libpod-conmon-2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471.scope.
Jan 20 14:20:55 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:20:55 np0005589310 podman[239961]: 2026-01-20 19:20:55.792519521 +0000 UTC m=+0.022307627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:20:55 np0005589310 podman[239961]: 2026-01-20 19:20:55.904227055 +0000 UTC m=+0.134015161 container init 2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 14:20:55 np0005589310 podman[239961]: 2026-01-20 19:20:55.911384747 +0000 UTC m=+0.141172833 container start 2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:20:55 np0005589310 podman[239961]: 2026-01-20 19:20:55.915468986 +0000 UTC m=+0.145257092 container attach 2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:20:55 np0005589310 elastic_einstein[239977]: 167 167
Jan 20 14:20:55 np0005589310 systemd[1]: libpod-2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471.scope: Deactivated successfully.
Jan 20 14:20:55 np0005589310 podman[239961]: 2026-01-20 19:20:55.918497918 +0000 UTC m=+0.148285994 container died 2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:20:55 np0005589310 systemd[1]: var-lib-containers-storage-overlay-a9bd36c412d7c13510533ba24ce4392f2a174be136c4bd48d12ce171f7e17910-merged.mount: Deactivated successfully.
Jan 20 14:20:55 np0005589310 podman[239961]: 2026-01-20 19:20:55.957833283 +0000 UTC m=+0.187621369 container remove 2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:20:55 np0005589310 systemd[1]: libpod-conmon-2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471.scope: Deactivated successfully.
Jan 20 14:20:55 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:56 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:20:56 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3836286012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:20:56 np0005589310 podman[240003]: 2026-01-20 19:20:56.118951655 +0000 UTC m=+0.049162612 container create f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:20:56 np0005589310 nova_compute[239038]: 2026-01-20 19:20:56.128 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:20:56 np0005589310 nova_compute[239038]: 2026-01-20 19:20:56.136 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed in ProviderTree for provider: 178956bf-6050-42b7-876f-3f96271cf4ff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 20 14:20:56 np0005589310 nova_compute[239038]: 2026-01-20 19:20:56.156 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed for provider 178956bf-6050-42b7-876f-3f96271cf4ff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 20 14:20:56 np0005589310 systemd[1]: Started libpod-conmon-f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a.scope.
Jan 20 14:20:56 np0005589310 nova_compute[239038]: 2026-01-20 19:20:56.157 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 20 14:20:56 np0005589310 nova_compute[239038]: 2026-01-20 19:20:56.157 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:20:56 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:20:56 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de9899fd5399446f320af1d4c4f4e0631ab212b847c6fdbb2f1dcf8bb36f932/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:56 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de9899fd5399446f320af1d4c4f4e0631ab212b847c6fdbb2f1dcf8bb36f932/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:56 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de9899fd5399446f320af1d4c4f4e0631ab212b847c6fdbb2f1dcf8bb36f932/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:56 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de9899fd5399446f320af1d4c4f4e0631ab212b847c6fdbb2f1dcf8bb36f932/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:56 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de9899fd5399446f320af1d4c4f4e0631ab212b847c6fdbb2f1dcf8bb36f932/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:56 np0005589310 podman[240003]: 2026-01-20 19:20:56.098702229 +0000 UTC m=+0.028913216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:20:56 np0005589310 podman[240003]: 2026-01-20 19:20:56.194822008 +0000 UTC m=+0.125032975 container init f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:20:56 np0005589310 podman[240003]: 2026-01-20 19:20:56.201848577 +0000 UTC m=+0.132059534 container start f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:20:56 np0005589310 podman[240003]: 2026-01-20 19:20:56.205197258 +0000 UTC m=+0.135408215 container attach f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 14:20:56 np0005589310 competent_haibt[240021]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:20:56 np0005589310 competent_haibt[240021]: --> All data devices are unavailable
Jan 20 14:20:56 np0005589310 systemd[1]: libpod-f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a.scope: Deactivated successfully.
Jan 20 14:20:56 np0005589310 podman[240003]: 2026-01-20 19:20:56.678880401 +0000 UTC m=+0.609091358 container died f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:20:56 np0005589310 systemd[1]: var-lib-containers-storage-overlay-4de9899fd5399446f320af1d4c4f4e0631ab212b847c6fdbb2f1dcf8bb36f932-merged.mount: Deactivated successfully.
Jan 20 14:20:56 np0005589310 podman[240003]: 2026-01-20 19:20:56.744481028 +0000 UTC m=+0.674691985 container remove f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:20:56 np0005589310 systemd[1]: libpod-conmon-f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a.scope: Deactivated successfully.
Jan 20 14:20:57 np0005589310 podman[240116]: 2026-01-20 19:20:57.23688214 +0000 UTC m=+0.042581364 container create 06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sammet, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 20 14:20:57 np0005589310 systemd[1]: Started libpod-conmon-06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095.scope.
Jan 20 14:20:57 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:20:57 np0005589310 podman[240116]: 2026-01-20 19:20:57.217154916 +0000 UTC m=+0.022854170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:20:57 np0005589310 podman[240116]: 2026-01-20 19:20:57.341989326 +0000 UTC m=+0.147688610 container init 06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:20:57 np0005589310 podman[240116]: 2026-01-20 19:20:57.351208048 +0000 UTC m=+0.156907292 container start 06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sammet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 14:20:57 np0005589310 podman[240116]: 2026-01-20 19:20:57.354735033 +0000 UTC m=+0.160434317 container attach 06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:20:57 np0005589310 thirsty_sammet[240133]: 167 167
Jan 20 14:20:57 np0005589310 systemd[1]: libpod-06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095.scope: Deactivated successfully.
Jan 20 14:20:57 np0005589310 podman[240116]: 2026-01-20 19:20:57.357712754 +0000 UTC m=+0.163411988 container died 06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sammet, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 14:20:57 np0005589310 systemd[1]: var-lib-containers-storage-overlay-23640d1a69d3708fe6ce3aa85a7495cf3b203d618ea468f3aeff9d9edaa06ee6-merged.mount: Deactivated successfully.
Jan 20 14:20:57 np0005589310 podman[240116]: 2026-01-20 19:20:57.43537809 +0000 UTC m=+0.241077334 container remove 06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:20:57 np0005589310 systemd[1]: libpod-conmon-06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095.scope: Deactivated successfully.
Jan 20 14:20:57 np0005589310 podman[240159]: 2026-01-20 19:20:57.634075345 +0000 UTC m=+0.048919026 container create c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:20:57 np0005589310 systemd[1]: Started libpod-conmon-c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d.scope.
Jan 20 14:20:57 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:20:57 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0cf313c218f3c54f218c44903351b7b2e3a65e9d7efd4535c0469def59138e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:57 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0cf313c218f3c54f218c44903351b7b2e3a65e9d7efd4535c0469def59138e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:57 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0cf313c218f3c54f218c44903351b7b2e3a65e9d7efd4535c0469def59138e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:57 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0cf313c218f3c54f218c44903351b7b2e3a65e9d7efd4535c0469def59138e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:57 np0005589310 podman[240159]: 2026-01-20 19:20:57.613422589 +0000 UTC m=+0.028266320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:20:57 np0005589310 podman[240159]: 2026-01-20 19:20:57.71208866 +0000 UTC m=+0.126932361 container init c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_poitras, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:20:57 np0005589310 podman[240159]: 2026-01-20 19:20:57.720176205 +0000 UTC m=+0.135019886 container start c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_poitras, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 20 14:20:57 np0005589310 podman[240159]: 2026-01-20 19:20:57.724085538 +0000 UTC m=+0.138929219 container attach c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]: {
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:    "0": [
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:        {
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "devices": [
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "/dev/loop3"
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            ],
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "lv_name": "ceph_lv0",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "lv_size": "21470642176",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "name": "ceph_lv0",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "tags": {
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.cluster_name": "ceph",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.crush_device_class": "",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.encrypted": "0",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.objectstore": "bluestore",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.osd_id": "0",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.type": "block",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.vdo": "0",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.with_tpm": "0"
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            },
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "type": "block",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "vg_name": "ceph_vg0"
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:        }
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:    ],
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:    "1": [
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:        {
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "devices": [
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "/dev/loop4"
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            ],
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "lv_name": "ceph_lv1",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "lv_size": "21470642176",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "name": "ceph_lv1",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "tags": {
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.cluster_name": "ceph",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.crush_device_class": "",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.encrypted": "0",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.objectstore": "bluestore",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.osd_id": "1",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.type": "block",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.vdo": "0",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.with_tpm": "0"
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            },
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "type": "block",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "vg_name": "ceph_vg1"
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:        }
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:    ],
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:    "2": [
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:        {
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "devices": [
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "/dev/loop5"
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            ],
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "lv_name": "ceph_lv2",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "lv_size": "21470642176",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "name": "ceph_lv2",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "tags": {
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.cluster_name": "ceph",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.crush_device_class": "",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.encrypted": "0",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.objectstore": "bluestore",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.osd_id": "2",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.type": "block",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.vdo": "0",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:                "ceph.with_tpm": "0"
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            },
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "type": "block",
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:            "vg_name": "ceph_vg2"
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:        }
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]:    ]
Jan 20 14:20:57 np0005589310 pensive_poitras[240175]: }
Jan 20 14:20:57 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:58 np0005589310 systemd[1]: libpod-c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d.scope: Deactivated successfully.
Jan 20 14:20:58 np0005589310 podman[240159]: 2026-01-20 19:20:58.015874361 +0000 UTC m=+0.430718052 container died c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:20:58 np0005589310 systemd[1]: var-lib-containers-storage-overlay-be0cf313c218f3c54f218c44903351b7b2e3a65e9d7efd4535c0469def59138e-merged.mount: Deactivated successfully.
Jan 20 14:20:58 np0005589310 podman[240159]: 2026-01-20 19:20:58.071409125 +0000 UTC m=+0.486252806 container remove c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_poitras, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 14:20:58 np0005589310 systemd[1]: libpod-conmon-c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d.scope: Deactivated successfully.
Jan 20 14:20:58 np0005589310 podman[240258]: 2026-01-20 19:20:58.540021146 +0000 UTC m=+0.038665699 container create 18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gauss, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:20:58 np0005589310 systemd[1]: Started libpod-conmon-18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875.scope.
Jan 20 14:20:58 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:20:58 np0005589310 podman[240258]: 2026-01-20 19:20:58.521977042 +0000 UTC m=+0.020621615 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:20:58 np0005589310 podman[240258]: 2026-01-20 19:20:58.623497593 +0000 UTC m=+0.122142166 container init 18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:20:58 np0005589310 podman[240258]: 2026-01-20 19:20:58.631200357 +0000 UTC m=+0.129844900 container start 18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 14:20:58 np0005589310 podman[240258]: 2026-01-20 19:20:58.635932531 +0000 UTC m=+0.134577194 container attach 18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gauss, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:20:58 np0005589310 practical_gauss[240274]: 167 167
Jan 20 14:20:58 np0005589310 systemd[1]: libpod-18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875.scope: Deactivated successfully.
Jan 20 14:20:58 np0005589310 conmon[240274]: conmon 18eedb9b2479e447dc73 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875.scope/container/memory.events
Jan 20 14:20:58 np0005589310 podman[240258]: 2026-01-20 19:20:58.640959172 +0000 UTC m=+0.139603745 container died 18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gauss, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 14:20:58 np0005589310 systemd[1]: var-lib-containers-storage-overlay-6a59be3d97d7b732fa7622db56ab59d54fcd8aebb74663b73ac0653a0e288415-merged.mount: Deactivated successfully.
Jan 20 14:20:58 np0005589310 podman[240258]: 2026-01-20 19:20:58.752650896 +0000 UTC m=+0.251295459 container remove 18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 14:20:58 np0005589310 systemd[1]: libpod-conmon-18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875.scope: Deactivated successfully.
Jan 20 14:20:58 np0005589310 podman[240297]: 2026-01-20 19:20:58.913128763 +0000 UTC m=+0.045991077 container create 3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_darwin, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 14:20:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:20:58 np0005589310 systemd[1]: Started libpod-conmon-3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf.scope.
Jan 20 14:20:58 np0005589310 podman[240297]: 2026-01-20 19:20:58.892675611 +0000 UTC m=+0.025537925 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:20:59 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:20:59 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ea9e10dc33b5edfe713b26da382aba27792c8e58ad3083da4f0f255e121acc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:59 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ea9e10dc33b5edfe713b26da382aba27792c8e58ad3083da4f0f255e121acc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:59 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ea9e10dc33b5edfe713b26da382aba27792c8e58ad3083da4f0f255e121acc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:59 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ea9e10dc33b5edfe713b26da382aba27792c8e58ad3083da4f0f255e121acc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:20:59 np0005589310 podman[240297]: 2026-01-20 19:20:59.056836806 +0000 UTC m=+0.189699130 container init 3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_darwin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:20:59 np0005589310 podman[240297]: 2026-01-20 19:20:59.068075476 +0000 UTC m=+0.200937780 container start 3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 14:20:59 np0005589310 podman[240297]: 2026-01-20 19:20:59.072477471 +0000 UTC m=+0.205339785 container attach 3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 20 14:20:59 np0005589310 lvm[240392]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:20:59 np0005589310 lvm[240392]: VG ceph_vg1 finished
Jan 20 14:20:59 np0005589310 lvm[240391]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:20:59 np0005589310 lvm[240391]: VG ceph_vg0 finished
Jan 20 14:20:59 np0005589310 lvm[240394]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:20:59 np0005589310 lvm[240394]: VG ceph_vg2 finished
Jan 20 14:20:59 np0005589310 brave_darwin[240313]: {}
Jan 20 14:20:59 np0005589310 systemd[1]: libpod-3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf.scope: Deactivated successfully.
Jan 20 14:20:59 np0005589310 systemd[1]: libpod-3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf.scope: Consumed 1.315s CPU time.
Jan 20 14:20:59 np0005589310 podman[240297]: 2026-01-20 19:20:59.846005731 +0000 UTC m=+0.978868025 container died 3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_darwin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:20:59 np0005589310 systemd[1]: var-lib-containers-storage-overlay-68ea9e10dc33b5edfe713b26da382aba27792c8e58ad3083da4f0f255e121acc-merged.mount: Deactivated successfully.
Jan 20 14:20:59 np0005589310 podman[240297]: 2026-01-20 19:20:59.886236228 +0000 UTC m=+1.019098522 container remove 3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:20:59 np0005589310 systemd[1]: libpod-conmon-3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf.scope: Deactivated successfully.
Jan 20 14:20:59 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:20:59 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:20:59 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:20:59 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:20:59 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:21:00 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:21:00 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:21:01 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:03 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:21:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:21:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:21:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:21:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:21:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:21:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:21:05.446 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:21:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:21:05.448 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:21:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:21:05.448 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:21:05 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:07 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:08 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 20 14:21:08 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:08.938712) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:21:08 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 20 14:21:08 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936868938792, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1218, "num_deletes": 505, "total_data_size": 1360323, "memory_usage": 1392080, "flush_reason": "Manual Compaction"}
Jan 20 14:21:08 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 20 14:21:08 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936868955317, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1336199, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13645, "largest_seqno": 14862, "table_properties": {"data_size": 1330829, "index_size": 2318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14112, "raw_average_key_size": 17, "raw_value_size": 1318053, "raw_average_value_size": 1679, "num_data_blocks": 106, "num_entries": 785, "num_filter_entries": 785, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936784, "oldest_key_time": 1768936784, "file_creation_time": 1768936868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:21:08 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 16883 microseconds, and 6109 cpu microseconds.
Jan 20 14:21:08 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:08.955592) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1336199 bytes OK
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:08.955632) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.000183) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.000272) EVENT_LOG_v1 {"time_micros": 1768936869000256, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.000313) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1353668, prev total WAL file size 1353668, number of live WAL files 2.
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.001682) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1304KB)], [32(7749KB)]
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936869001743, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9271246, "oldest_snapshot_seqno": -1}
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3823 keys, 7353896 bytes, temperature: kUnknown
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936869117003, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7353896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7326665, "index_size": 16561, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 93646, "raw_average_key_size": 24, "raw_value_size": 7255794, "raw_average_value_size": 1897, "num_data_blocks": 701, "num_entries": 3823, "num_filter_entries": 3823, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 1768936869, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.117774) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7353896 bytes
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.119438) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.1 rd, 63.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.6 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(12.4) write-amplify(5.5) OK, records in: 4846, records dropped: 1023 output_compression: NoCompression
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.119489) EVENT_LOG_v1 {"time_micros": 1768936869119457, "job": 14, "event": "compaction_finished", "compaction_time_micros": 115809, "compaction_time_cpu_micros": 32001, "output_level": 6, "num_output_files": 1, "total_output_size": 7353896, "num_input_records": 4846, "num_output_records": 3823, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936869119979, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936869121847, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.001468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.122017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.122027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.122029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.122032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:21:09 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.122034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:21:09 np0005589310 podman[240435]: 2026-01-20 19:21:09.442556408 +0000 UTC m=+0.114063462 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 20 14:21:09 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:11 np0005589310 podman[240461]: 2026-01-20 19:21:11.371532643 +0000 UTC m=+0.046608212 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 20 14:21:11 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:13 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 20 14:21:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1378073320' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 20 14:21:14 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 20 14:21:14 np0005589310 ceph-mgr[75417]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 14:21:14 np0005589310 ceph-mgr[75417]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 14:21:15 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:17 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:19 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:21 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:23 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:26 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:28 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:30 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:21:31
Jan 20 14:21:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:21:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:21:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'images']
Jan 20 14:21:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:21:32 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:21:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:21:36 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:38 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:40 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:40 np0005589310 podman[240482]: 2026-01-20 19:21:40.404951698 +0000 UTC m=+0.079133803 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 20 14:21:42 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:42 np0005589310 podman[240508]: 2026-01-20 19:21:42.389487429 +0000 UTC m=+0.055858874 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 14:21:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:21:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:21:46 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:48 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 14:21:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2002419222' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 14:21:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 14:21:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2002419222' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 14:21:50 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:52 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:21:54 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:56 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.149 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.150 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.181 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.181 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.182 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.198 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.198 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.199 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.199 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.199 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.712 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.713 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.713 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.713 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 20 14:21:56 np0005589310 nova_compute[239038]: 2026-01-20 19:21:56.714 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:21:57 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:21:57 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1838349769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:21:57 np0005589310 nova_compute[239038]: 2026-01-20 19:21:57.260 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:21:57 np0005589310 nova_compute[239038]: 2026-01-20 19:21:57.413 239044 WARNING nova.virt.libvirt.driver [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 20 14:21:57 np0005589310 nova_compute[239038]: 2026-01-20 19:21:57.415 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5176MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 14:21:57 np0005589310 nova_compute[239038]: 2026-01-20 19:21:57.415 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 14:21:57 np0005589310 nova_compute[239038]: 2026-01-20 19:21:57.415 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 14:21:57 np0005589310 nova_compute[239038]: 2026-01-20 19:21:57.481 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 14:21:57 np0005589310 nova_compute[239038]: 2026-01-20 19:21:57.481 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 14:21:57 np0005589310 nova_compute[239038]: 2026-01-20 19:21:57.494 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 14:21:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:21:58 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/209206065' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:21:58 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:21:58 np0005589310 nova_compute[239038]: 2026-01-20 19:21:58.033 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 14:21:58 np0005589310 nova_compute[239038]: 2026-01-20 19:21:58.041 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed in ProviderTree for provider: 178956bf-6050-42b7-876f-3f96271cf4ff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 14:21:58 np0005589310 nova_compute[239038]: 2026-01-20 19:21:58.060 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed for provider 178956bf-6050-42b7-876f-3f96271cf4ff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 14:21:58 np0005589310 nova_compute[239038]: 2026-01-20 19:21:58.061 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 14:21:58 np0005589310 nova_compute[239038]: 2026-01-20 19:21:58.061 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 14:21:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:00 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:22:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:22:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:22:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:22:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:22:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:22:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:22:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:22:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:22:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:22:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:22:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:22:01 np0005589310 podman[240715]: 2026-01-20 19:22:01.209549967 +0000 UTC m=+0.044295275 container create d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:22:01 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:22:01 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:22:01 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:22:01 np0005589310 systemd[1]: Started libpod-conmon-d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7.scope.
Jan 20 14:22:01 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:22:01 np0005589310 podman[240715]: 2026-01-20 19:22:01.188296766 +0000 UTC m=+0.023042104 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:22:01 np0005589310 podman[240715]: 2026-01-20 19:22:01.295330559 +0000 UTC m=+0.130075887 container init d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 14:22:01 np0005589310 podman[240715]: 2026-01-20 19:22:01.307351407 +0000 UTC m=+0.142096715 container start d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 20 14:22:01 np0005589310 podman[240715]: 2026-01-20 19:22:01.310619716 +0000 UTC m=+0.145365054 container attach d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 20 14:22:01 np0005589310 reverent_kare[240731]: 167 167
Jan 20 14:22:01 np0005589310 systemd[1]: libpod-d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7.scope: Deactivated successfully.
Jan 20 14:22:01 np0005589310 conmon[240731]: conmon d6f282ef2e1ef2eb1dce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7.scope/container/memory.events
Jan 20 14:22:01 np0005589310 podman[240715]: 2026-01-20 19:22:01.316069067 +0000 UTC m=+0.150814375 container died d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 14:22:01 np0005589310 systemd[1]: var-lib-containers-storage-overlay-688906c9646a1d69c8b74cd24b9eb4c27e0838b0c9b2cd44cab78828a4bf1f70-merged.mount: Deactivated successfully.
Jan 20 14:22:01 np0005589310 podman[240715]: 2026-01-20 19:22:01.355517495 +0000 UTC m=+0.190262803 container remove d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:22:01 np0005589310 systemd[1]: libpod-conmon-d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7.scope: Deactivated successfully.
Jan 20 14:22:01 np0005589310 podman[240755]: 2026-01-20 19:22:01.534019094 +0000 UTC m=+0.055012133 container create c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:22:01 np0005589310 systemd[1]: Started libpod-conmon-c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c.scope.
Jan 20 14:22:01 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:22:01 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b79197827d0da07df3342db1d0da9452232a11f58c795973e2402f89faea5aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:22:01 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b79197827d0da07df3342db1d0da9452232a11f58c795973e2402f89faea5aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:22:01 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b79197827d0da07df3342db1d0da9452232a11f58c795973e2402f89faea5aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:22:01 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b79197827d0da07df3342db1d0da9452232a11f58c795973e2402f89faea5aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:22:01 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b79197827d0da07df3342db1d0da9452232a11f58c795973e2402f89faea5aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:22:01 np0005589310 podman[240755]: 2026-01-20 19:22:01.607670434 +0000 UTC m=+0.128663473 container init c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:22:01 np0005589310 podman[240755]: 2026-01-20 19:22:01.517007745 +0000 UTC m=+0.038000814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:22:01 np0005589310 podman[240755]: 2026-01-20 19:22:01.616172259 +0000 UTC m=+0.137165298 container start c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:22:01 np0005589310 podman[240755]: 2026-01-20 19:22:01.619712204 +0000 UTC m=+0.140705263 container attach c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:22:02 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:02 np0005589310 clever_kalam[240772]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:22:02 np0005589310 clever_kalam[240772]: --> All data devices are unavailable
Jan 20 14:22:02 np0005589310 systemd[1]: libpod-c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c.scope: Deactivated successfully.
Jan 20 14:22:02 np0005589310 podman[240792]: 2026-01-20 19:22:02.174531297 +0000 UTC m=+0.024461759 container died c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 14:22:02 np0005589310 systemd[1]: var-lib-containers-storage-overlay-0b79197827d0da07df3342db1d0da9452232a11f58c795973e2402f89faea5aa-merged.mount: Deactivated successfully.
Jan 20 14:22:02 np0005589310 podman[240792]: 2026-01-20 19:22:02.218508493 +0000 UTC m=+0.068438905 container remove c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 20 14:22:02 np0005589310 systemd[1]: libpod-conmon-c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c.scope: Deactivated successfully.
Jan 20 14:22:02 np0005589310 podman[240868]: 2026-01-20 19:22:02.665444764 +0000 UTC m=+0.041375195 container create cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:22:02 np0005589310 systemd[1]: Started libpod-conmon-cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5.scope.
Jan 20 14:22:02 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:22:02 np0005589310 podman[240868]: 2026-01-20 19:22:02.740831385 +0000 UTC m=+0.116761836 container init cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:22:02 np0005589310 podman[240868]: 2026-01-20 19:22:02.64823087 +0000 UTC m=+0.024161321 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:22:02 np0005589310 podman[240868]: 2026-01-20 19:22:02.747607078 +0000 UTC m=+0.123537509 container start cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:22:02 np0005589310 podman[240868]: 2026-01-20 19:22:02.750665252 +0000 UTC m=+0.126595683 container attach cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:22:02 np0005589310 objective_merkle[240884]: 167 167
Jan 20 14:22:02 np0005589310 systemd[1]: libpod-cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5.scope: Deactivated successfully.
Jan 20 14:22:02 np0005589310 podman[240868]: 2026-01-20 19:22:02.753340456 +0000 UTC m=+0.129270887 container died cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 14:22:02 np0005589310 systemd[1]: var-lib-containers-storage-overlay-959f2de8dfe2788e51058357ea3f84954a4ea73ea6c7882d2efb0486767ca7ad-merged.mount: Deactivated successfully.
Jan 20 14:22:02 np0005589310 podman[240868]: 2026-01-20 19:22:02.791479972 +0000 UTC m=+0.167410403 container remove cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_merkle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 14:22:02 np0005589310 systemd[1]: libpod-conmon-cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5.scope: Deactivated successfully.
Jan 20 14:22:02 np0005589310 podman[240908]: 2026-01-20 19:22:02.940551775 +0000 UTC m=+0.035772291 container create 5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_goldwasser, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:22:02 np0005589310 systemd[1]: Started libpod-conmon-5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9.scope.
Jan 20 14:22:03 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:22:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60a507efbe3981d928cb319e1b76e16c20c4dac94f72011bea47264dec7a774/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:22:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60a507efbe3981d928cb319e1b76e16c20c4dac94f72011bea47264dec7a774/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:22:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60a507efbe3981d928cb319e1b76e16c20c4dac94f72011bea47264dec7a774/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:22:03 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60a507efbe3981d928cb319e1b76e16c20c4dac94f72011bea47264dec7a774/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:22:03 np0005589310 podman[240908]: 2026-01-20 19:22:03.013800415 +0000 UTC m=+0.109021011 container init 5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_goldwasser, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:22:03 np0005589310 podman[240908]: 2026-01-20 19:22:02.925328579 +0000 UTC m=+0.020549125 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:22:03 np0005589310 podman[240908]: 2026-01-20 19:22:03.021655774 +0000 UTC m=+0.116876290 container start 5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_goldwasser, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:22:03 np0005589310 podman[240908]: 2026-01-20 19:22:03.025573749 +0000 UTC m=+0.120794315 container attach 5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_goldwasser, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]: {
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:    "0": [
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:        {
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "devices": [
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "/dev/loop3"
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            ],
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "lv_name": "ceph_lv0",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "lv_size": "21470642176",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "name": "ceph_lv0",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "tags": {
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.cluster_name": "ceph",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.crush_device_class": "",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.encrypted": "0",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.objectstore": "bluestore",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.osd_id": "0",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.type": "block",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.vdo": "0",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.with_tpm": "0"
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            },
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "type": "block",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "vg_name": "ceph_vg0"
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:        }
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:    ],
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:    "1": [
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:        {
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "devices": [
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "/dev/loop4"
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            ],
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "lv_name": "ceph_lv1",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "lv_size": "21470642176",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "name": "ceph_lv1",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "tags": {
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.cluster_name": "ceph",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.crush_device_class": "",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.encrypted": "0",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.objectstore": "bluestore",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.osd_id": "1",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.type": "block",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.vdo": "0",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.with_tpm": "0"
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            },
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "type": "block",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "vg_name": "ceph_vg1"
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:        }
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:    ],
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:    "2": [
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:        {
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "devices": [
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "/dev/loop5"
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            ],
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "lv_name": "ceph_lv2",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "lv_size": "21470642176",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "name": "ceph_lv2",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "tags": {
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.cluster_name": "ceph",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.crush_device_class": "",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.encrypted": "0",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.objectstore": "bluestore",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.osd_id": "2",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.type": "block",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.vdo": "0",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:                "ceph.with_tpm": "0"
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            },
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "type": "block",
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:            "vg_name": "ceph_vg2"
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:        }
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]:    ]
Jan 20 14:22:03 np0005589310 inspiring_goldwasser[240924]: }
Jan 20 14:22:03 np0005589310 systemd[1]: libpod-5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9.scope: Deactivated successfully.
Jan 20 14:22:03 np0005589310 podman[240908]: 2026-01-20 19:22:03.308214811 +0000 UTC m=+0.403435337 container died 5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 14:22:03 np0005589310 systemd[1]: var-lib-containers-storage-overlay-b60a507efbe3981d928cb319e1b76e16c20c4dac94f72011bea47264dec7a774-merged.mount: Deactivated successfully.
Jan 20 14:22:03 np0005589310 podman[240908]: 2026-01-20 19:22:03.349675587 +0000 UTC m=+0.444896113 container remove 5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:22:03 np0005589310 systemd[1]: libpod-conmon-5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9.scope: Deactivated successfully.
Jan 20 14:22:03 np0005589310 podman[241009]: 2026-01-20 19:22:03.803582015 +0000 UTC m=+0.037252646 container create 2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pasteur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 20 14:22:03 np0005589310 systemd[1]: Started libpod-conmon-2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb.scope.
Jan 20 14:22:03 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:22:03 np0005589310 podman[241009]: 2026-01-20 19:22:03.86789181 +0000 UTC m=+0.101562461 container init 2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pasteur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 14:22:03 np0005589310 podman[241009]: 2026-01-20 19:22:03.874475329 +0000 UTC m=+0.108145960 container start 2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pasteur, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 14:22:03 np0005589310 podman[241009]: 2026-01-20 19:22:03.877848409 +0000 UTC m=+0.111519100 container attach 2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pasteur, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:22:03 np0005589310 inspiring_pasteur[241026]: 167 167
Jan 20 14:22:03 np0005589310 systemd[1]: libpod-2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb.scope: Deactivated successfully.
Jan 20 14:22:03 np0005589310 podman[241009]: 2026-01-20 19:22:03.879792286 +0000 UTC m=+0.113462907 container died 2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:22:03 np0005589310 podman[241009]: 2026-01-20 19:22:03.788069092 +0000 UTC m=+0.021739723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:22:03 np0005589310 systemd[1]: var-lib-containers-storage-overlay-6bb54d631088a87b387c2598d51d53dacece8d8ea03c060b4fbd9c5e884bf4c6-merged.mount: Deactivated successfully.
Jan 20 14:22:03 np0005589310 podman[241009]: 2026-01-20 19:22:03.914592513 +0000 UTC m=+0.148263144 container remove 2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pasteur, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:22:03 np0005589310 systemd[1]: libpod-conmon-2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb.scope: Deactivated successfully.
Jan 20 14:22:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:04 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:04 np0005589310 podman[241048]: 2026-01-20 19:22:04.076534114 +0000 UTC m=+0.027912522 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:22:04 np0005589310 podman[241048]: 2026-01-20 19:22:04.193101966 +0000 UTC m=+0.144480354 container create 1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_volhard, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 14:22:04 np0005589310 systemd[1]: Started libpod-conmon-1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3.scope.
Jan 20 14:22:04 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:22:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0760e29fe017c935b34348ce07a462e14f09a3dd651ef011c1853bb6803e5484/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:22:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0760e29fe017c935b34348ce07a462e14f09a3dd651ef011c1853bb6803e5484/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:22:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0760e29fe017c935b34348ce07a462e14f09a3dd651ef011c1853bb6803e5484/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:22:04 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0760e29fe017c935b34348ce07a462e14f09a3dd651ef011c1853bb6803e5484/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:22:04 np0005589310 podman[241048]: 2026-01-20 19:22:04.287994965 +0000 UTC m=+0.239373383 container init 1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_volhard, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:22:04 np0005589310 podman[241048]: 2026-01-20 19:22:04.295875065 +0000 UTC m=+0.247253473 container start 1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_volhard, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:22:04 np0005589310 podman[241048]: 2026-01-20 19:22:04.299377029 +0000 UTC m=+0.250755437 container attach 1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_volhard, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:22:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:22:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:22:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:22:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:22:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:22:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:22:04 np0005589310 lvm[241143]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:22:04 np0005589310 lvm[241144]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:22:04 np0005589310 lvm[241144]: VG ceph_vg1 finished
Jan 20 14:22:04 np0005589310 lvm[241143]: VG ceph_vg0 finished
Jan 20 14:22:04 np0005589310 lvm[241146]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:22:04 np0005589310 lvm[241146]: VG ceph_vg2 finished
Jan 20 14:22:05 np0005589310 practical_volhard[241065]: {}
Jan 20 14:22:05 np0005589310 systemd[1]: libpod-1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3.scope: Deactivated successfully.
Jan 20 14:22:05 np0005589310 systemd[1]: libpod-1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3.scope: Consumed 1.304s CPU time.
Jan 20 14:22:05 np0005589310 podman[241048]: 2026-01-20 19:22:05.072913758 +0000 UTC m=+1.024292146 container died 1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:22:05 np0005589310 systemd[1]: var-lib-containers-storage-overlay-0760e29fe017c935b34348ce07a462e14f09a3dd651ef011c1853bb6803e5484-merged.mount: Deactivated successfully.
Jan 20 14:22:05 np0005589310 podman[241048]: 2026-01-20 19:22:05.200256248 +0000 UTC m=+1.151634636 container remove 1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 14:22:05 np0005589310 systemd[1]: libpod-conmon-1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3.scope: Deactivated successfully.
Jan 20 14:22:05 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:22:05 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:22:05 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:22:05 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:22:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:22:05.447 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:22:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:22:05.449 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:22:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:22:05.449 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:22:06 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:06 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:22:06 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:22:07 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:22:07 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3404 writes, 15K keys, 3404 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3403 writes, 3403 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1309 writes, 5927 keys, 1309 commit groups, 1.0 writes per commit group, ingest: 8.70 MB, 0.01 MB/s#012Interval WAL: 1308 writes, 1308 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    103.8      0.15              0.04         7    0.022       0      0       0.0       0.0#012  L6      1/0    7.01 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.7    119.7     99.0      0.43              0.13         6    0.072     24K   3195       0.0       0.0#012 Sum      1/0    7.01 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.7     88.3    100.3      0.59              0.17        13    0.045     24K   3195       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0     95.5     95.7      0.37              0.10         8    0.046     17K   2463       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    119.7     99.0      0.43              0.13         6    0.072     24K   3195       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    129.1      0.12              0.04         6    0.021       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.9      0.03              0.00         1    0.031       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.016, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.6 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55eae3cfb8d0#2 capacity: 308.00 MB usage: 1.82 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(106,1.60 MB,0.520032%) FilterBlock(14,74.73 KB,0.0236957%) IndexBlock(14,152.67 KB,0.048407%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 20 14:22:08 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:08 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:22:08.687 154796 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:2e:45', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:02:c4:e7:e3:a1'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 20 14:22:08 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:22:08.688 154796 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 20 14:22:08 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:22:08.689 154796 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=15f2b046-37e6-488b-9e52-3d187c798598, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 20 14:22:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:10 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:11 np0005589310 podman[241187]: 2026-01-20 19:22:11.42299082 +0000 UTC m=+0.088438477 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 14:22:12 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:13 np0005589310 podman[241213]: 2026-01-20 19:22:13.386950416 +0000 UTC m=+0.050705769 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 20 14:22:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:14 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:16 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:18 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:20 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:22 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:24 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:26 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:28 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:30 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:22:31
Jan 20 14:22:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:22:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:22:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'vms', 'images', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.log']
Jan 20 14:22:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:22:32 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:22:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:22:36 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:38 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:40 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:42 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:42 np0005589310 podman[241232]: 2026-01-20 19:22:42.461285844 +0000 UTC m=+0.128594852 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Jan 20 14:22:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:44 np0005589310 podman[241258]: 2026-01-20 19:22:44.380191867 +0000 UTC m=+0.051611792 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:22:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:22:46 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:48 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 14:22:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3724689865' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 14:22:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 14:22:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3724689865' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 14:22:50 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:52 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:54 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:56 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:56 np0005589310 nova_compute[239038]: 2026-01-20 19:22:56.061 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:22:56 np0005589310 nova_compute[239038]: 2026-01-20 19:22:56.061 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:22:56 np0005589310 nova_compute[239038]: 2026-01-20 19:22:56.062 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:22:56 np0005589310 nova_compute[239038]: 2026-01-20 19:22:56.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:22:56 np0005589310 nova_compute[239038]: 2026-01-20 19:22:56.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 20 14:22:56 np0005589310 nova_compute[239038]: 2026-01-20 19:22:56.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 20 14:22:56 np0005589310 nova_compute[239038]: 2026-01-20 19:22:56.700 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 20 14:22:57 np0005589310 nova_compute[239038]: 2026-01-20 19:22:57.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:22:57 np0005589310 nova_compute[239038]: 2026-01-20 19:22:57.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:22:57 np0005589310 nova_compute[239038]: 2026-01-20 19:22:57.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:22:57 np0005589310 nova_compute[239038]: 2026-01-20 19:22:57.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 20 14:22:58 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:22:58 np0005589310 nova_compute[239038]: 2026-01-20 19:22:58.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:22:58 np0005589310 nova_compute[239038]: 2026-01-20 19:22:58.684 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:22:58 np0005589310 nova_compute[239038]: 2026-01-20 19:22:58.708 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:22:58 np0005589310 nova_compute[239038]: 2026-01-20 19:22:58.709 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:22:58 np0005589310 nova_compute[239038]: 2026-01-20 19:22:58.709 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:22:58 np0005589310 nova_compute[239038]: 2026-01-20 19:22:58.709 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 20 14:22:58 np0005589310 nova_compute[239038]: 2026-01-20 19:22:58.710 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:22:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:22:59 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:22:59 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/93760089' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:22:59 np0005589310 nova_compute[239038]: 2026-01-20 19:22:59.248 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:22:59 np0005589310 nova_compute[239038]: 2026-01-20 19:22:59.427 239044 WARNING nova.virt.libvirt.driver [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 20 14:22:59 np0005589310 nova_compute[239038]: 2026-01-20 19:22:59.428 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5165MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 20 14:22:59 np0005589310 nova_compute[239038]: 2026-01-20 19:22:59.429 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:22:59 np0005589310 nova_compute[239038]: 2026-01-20 19:22:59.429 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:22:59 np0005589310 nova_compute[239038]: 2026-01-20 19:22:59.499 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 20 14:22:59 np0005589310 nova_compute[239038]: 2026-01-20 19:22:59.499 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 20 14:22:59 np0005589310 nova_compute[239038]: 2026-01-20 19:22:59.527 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:23:00 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:23:00 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3295232245' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:23:00 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:00 np0005589310 nova_compute[239038]: 2026-01-20 19:23:00.062 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:23:00 np0005589310 nova_compute[239038]: 2026-01-20 19:23:00.067 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed in ProviderTree for provider: 178956bf-6050-42b7-876f-3f96271cf4ff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 20 14:23:00 np0005589310 nova_compute[239038]: 2026-01-20 19:23:00.096 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed for provider 178956bf-6050-42b7-876f-3f96271cf4ff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 20 14:23:00 np0005589310 nova_compute[239038]: 2026-01-20 19:23:00.097 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 20 14:23:00 np0005589310 nova_compute[239038]: 2026-01-20 19:23:00.097 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:23:02 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:02 np0005589310 ceph-osd[87071]: bluestore.MempoolThread fragmentation_score=0.000127 took=0.000074s
Jan 20 14:23:02 np0005589310 ceph-osd[88112]: bluestore.MempoolThread fragmentation_score=0.000142 took=0.000036s
Jan 20 14:23:02 np0005589310 ceph-osd[86022]: bluestore.MempoolThread fragmentation_score=0.000140 took=0.000047s
Jan 20 14:23:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:23:04 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:23:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:23:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:23:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:23:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:23:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:23:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:23:05.449 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:23:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:23:05.449 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:23:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:23:05.449 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:23:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:23:06 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:23:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:23:06 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:23:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:23:06 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:06 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:23:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:23:06 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:23:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:23:06 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:23:06 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:23:06 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:23:06 np0005589310 podman[241465]: 2026-01-20 19:23:06.452812146 +0000 UTC m=+0.037626798 container create a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_antonelli, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:23:06 np0005589310 systemd[1]: Started libpod-conmon-a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de.scope.
Jan 20 14:23:06 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:23:06 np0005589310 podman[241465]: 2026-01-20 19:23:06.436953464 +0000 UTC m=+0.021768136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:23:06 np0005589310 podman[241465]: 2026-01-20 19:23:06.537174288 +0000 UTC m=+0.121988960 container init a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_antonelli, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:23:06 np0005589310 podman[241465]: 2026-01-20 19:23:06.544644347 +0000 UTC m=+0.129458999 container start a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:23:06 np0005589310 podman[241465]: 2026-01-20 19:23:06.548428319 +0000 UTC m=+0.133243031 container attach a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_antonelli, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 20 14:23:06 np0005589310 exciting_antonelli[241481]: 167 167
Jan 20 14:23:06 np0005589310 systemd[1]: libpod-a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de.scope: Deactivated successfully.
Jan 20 14:23:06 np0005589310 conmon[241481]: conmon a681ddd233932eee3f27 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de.scope/container/memory.events
Jan 20 14:23:06 np0005589310 podman[241465]: 2026-01-20 19:23:06.552943237 +0000 UTC m=+0.137757889 container died a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_antonelli, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:23:06 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:23:06 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:23:06 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:23:06 np0005589310 systemd[1]: var-lib-containers-storage-overlay-c2af17693ced31daa9b7db54034f21e61678271d6a4fd0bc056e1c43e7495e4e-merged.mount: Deactivated successfully.
Jan 20 14:23:06 np0005589310 podman[241465]: 2026-01-20 19:23:06.59125554 +0000 UTC m=+0.176070192 container remove a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_antonelli, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:23:06 np0005589310 systemd[1]: libpod-conmon-a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de.scope: Deactivated successfully.
Jan 20 14:23:06 np0005589310 podman[241505]: 2026-01-20 19:23:06.779469722 +0000 UTC m=+0.045608999 container create c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ritchie, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:23:06 np0005589310 systemd[1]: Started libpod-conmon-c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb.scope.
Jan 20 14:23:06 np0005589310 podman[241505]: 2026-01-20 19:23:06.761321455 +0000 UTC m=+0.027460752 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:23:06 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:23:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a438907097335f4d3ba851b07d4de4cfd9329a6023e04cec898e2a70fe5e79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a438907097335f4d3ba851b07d4de4cfd9329a6023e04cec898e2a70fe5e79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a438907097335f4d3ba851b07d4de4cfd9329a6023e04cec898e2a70fe5e79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a438907097335f4d3ba851b07d4de4cfd9329a6023e04cec898e2a70fe5e79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:06 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a438907097335f4d3ba851b07d4de4cfd9329a6023e04cec898e2a70fe5e79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:06 np0005589310 podman[241505]: 2026-01-20 19:23:06.879261575 +0000 UTC m=+0.145400872 container init c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:23:06 np0005589310 podman[241505]: 2026-01-20 19:23:06.886726855 +0000 UTC m=+0.152866132 container start c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:23:06 np0005589310 podman[241505]: 2026-01-20 19:23:06.895520746 +0000 UTC m=+0.161660023 container attach c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 14:23:07 np0005589310 busy_ritchie[241521]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:23:07 np0005589310 busy_ritchie[241521]: --> All data devices are unavailable
Jan 20 14:23:07 np0005589310 systemd[1]: libpod-c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb.scope: Deactivated successfully.
Jan 20 14:23:07 np0005589310 podman[241505]: 2026-01-20 19:23:07.411636355 +0000 UTC m=+0.677775632 container died c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:23:07 np0005589310 systemd[1]: var-lib-containers-storage-overlay-25a438907097335f4d3ba851b07d4de4cfd9329a6023e04cec898e2a70fe5e79-merged.mount: Deactivated successfully.
Jan 20 14:23:07 np0005589310 podman[241505]: 2026-01-20 19:23:07.562324064 +0000 UTC m=+0.828463341 container remove c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ritchie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 14:23:07 np0005589310 systemd[1]: libpod-conmon-c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb.scope: Deactivated successfully.
Jan 20 14:23:08 np0005589310 podman[241619]: 2026-01-20 19:23:08.017803362 +0000 UTC m=+0.042067154 container create bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclean, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:23:08 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:08 np0005589310 systemd[1]: Started libpod-conmon-bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3.scope.
Jan 20 14:23:08 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:23:08 np0005589310 podman[241619]: 2026-01-20 19:23:08.087380228 +0000 UTC m=+0.111644050 container init bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:23:08 np0005589310 podman[241619]: 2026-01-20 19:23:08.092707106 +0000 UTC m=+0.116970898 container start bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 14:23:08 np0005589310 podman[241619]: 2026-01-20 19:23:07.998736852 +0000 UTC m=+0.023000664 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:23:08 np0005589310 podman[241619]: 2026-01-20 19:23:08.09621384 +0000 UTC m=+0.120477622 container attach bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclean, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:23:08 np0005589310 epic_mclean[241635]: 167 167
Jan 20 14:23:08 np0005589310 systemd[1]: libpod-bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3.scope: Deactivated successfully.
Jan 20 14:23:08 np0005589310 podman[241619]: 2026-01-20 19:23:08.098154076 +0000 UTC m=+0.122417878 container died bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:23:08 np0005589310 systemd[1]: var-lib-containers-storage-overlay-a271f9a005fa7fbfe099ab564f71804379ae6d39e86f0341d9e962f855b15f88-merged.mount: Deactivated successfully.
Jan 20 14:23:08 np0005589310 podman[241619]: 2026-01-20 19:23:08.150985919 +0000 UTC m=+0.175249711 container remove bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 14:23:08 np0005589310 systemd[1]: libpod-conmon-bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3.scope: Deactivated successfully.
Jan 20 14:23:08 np0005589310 podman[241659]: 2026-01-20 19:23:08.294895885 +0000 UTC m=+0.036837298 container create a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hofstadter, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:23:08 np0005589310 systemd[1]: Started libpod-conmon-a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3.scope.
Jan 20 14:23:08 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:23:08 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a2a3c60376f280502c686a2ef634ff32c6d4cc8420aa8d840be98ac5262d2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:08 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a2a3c60376f280502c686a2ef634ff32c6d4cc8420aa8d840be98ac5262d2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:08 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a2a3c60376f280502c686a2ef634ff32c6d4cc8420aa8d840be98ac5262d2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:08 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a2a3c60376f280502c686a2ef634ff32c6d4cc8420aa8d840be98ac5262d2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:08 np0005589310 podman[241659]: 2026-01-20 19:23:08.279906923 +0000 UTC m=+0.021848336 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:23:08 np0005589310 podman[241659]: 2026-01-20 19:23:08.386156752 +0000 UTC m=+0.128098175 container init a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:23:08 np0005589310 podman[241659]: 2026-01-20 19:23:08.397093486 +0000 UTC m=+0.139034899 container start a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 14:23:08 np0005589310 podman[241659]: 2026-01-20 19:23:08.399901043 +0000 UTC m=+0.141842536 container attach a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]: {
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:    "0": [
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:        {
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "devices": [
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "/dev/loop3"
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            ],
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "lv_name": "ceph_lv0",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "lv_size": "21470642176",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "name": "ceph_lv0",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "tags": {
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.cluster_name": "ceph",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.crush_device_class": "",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.encrypted": "0",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.objectstore": "bluestore",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.osd_id": "0",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.type": "block",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.vdo": "0",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.with_tpm": "0"
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            },
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "type": "block",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "vg_name": "ceph_vg0"
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:        }
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:    ],
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:    "1": [
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:        {
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "devices": [
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "/dev/loop4"
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            ],
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "lv_name": "ceph_lv1",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "lv_size": "21470642176",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "name": "ceph_lv1",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "tags": {
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.cluster_name": "ceph",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.crush_device_class": "",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.encrypted": "0",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.objectstore": "bluestore",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.osd_id": "1",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.type": "block",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.vdo": "0",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.with_tpm": "0"
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            },
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "type": "block",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "vg_name": "ceph_vg1"
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:        }
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:    ],
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:    "2": [
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:        {
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "devices": [
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "/dev/loop5"
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            ],
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "lv_name": "ceph_lv2",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "lv_size": "21470642176",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "name": "ceph_lv2",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "tags": {
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.cluster_name": "ceph",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.crush_device_class": "",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.encrypted": "0",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.objectstore": "bluestore",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.osd_id": "2",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.type": "block",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.vdo": "0",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:                "ceph.with_tpm": "0"
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            },
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "type": "block",
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:            "vg_name": "ceph_vg2"
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:        }
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]:    ]
Jan 20 14:23:08 np0005589310 hungry_hofstadter[241675]: }
Jan 20 14:23:08 np0005589310 systemd[1]: libpod-a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3.scope: Deactivated successfully.
Jan 20 14:23:08 np0005589310 podman[241659]: 2026-01-20 19:23:08.681088075 +0000 UTC m=+0.423029488 container died a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:23:08 np0005589310 systemd[1]: var-lib-containers-storage-overlay-18a2a3c60376f280502c686a2ef634ff32c6d4cc8420aa8d840be98ac5262d2d-merged.mount: Deactivated successfully.
Jan 20 14:23:08 np0005589310 podman[241659]: 2026-01-20 19:23:08.721451007 +0000 UTC m=+0.463392420 container remove a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 14:23:08 np0005589310 systemd[1]: libpod-conmon-a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3.scope: Deactivated successfully.
Jan 20 14:23:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:23:09 np0005589310 podman[241759]: 2026-01-20 19:23:09.134602556 +0000 UTC m=+0.035496797 container create 52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 14:23:09 np0005589310 systemd[1]: Started libpod-conmon-52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d.scope.
Jan 20 14:23:09 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:23:09 np0005589310 podman[241759]: 2026-01-20 19:23:09.208186117 +0000 UTC m=+0.109080378 container init 52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:23:09 np0005589310 podman[241759]: 2026-01-20 19:23:09.119036141 +0000 UTC m=+0.019930402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:23:09 np0005589310 podman[241759]: 2026-01-20 19:23:09.214774916 +0000 UTC m=+0.115669157 container start 52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 14:23:09 np0005589310 podman[241759]: 2026-01-20 19:23:09.217983753 +0000 UTC m=+0.118878024 container attach 52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 14:23:09 np0005589310 angry_sanderson[241775]: 167 167
Jan 20 14:23:09 np0005589310 systemd[1]: libpod-52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d.scope: Deactivated successfully.
Jan 20 14:23:09 np0005589310 podman[241759]: 2026-01-20 19:23:09.219409267 +0000 UTC m=+0.120303528 container died 52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:23:09 np0005589310 systemd[1]: var-lib-containers-storage-overlay-1041fb1858d3b0bc284f5ca155323af21c7505e692b2ad40879ecd05c84fcab8-merged.mount: Deactivated successfully.
Jan 20 14:23:09 np0005589310 podman[241759]: 2026-01-20 19:23:09.253455708 +0000 UTC m=+0.154349949 container remove 52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:23:09 np0005589310 systemd[1]: libpod-conmon-52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d.scope: Deactivated successfully.
Jan 20 14:23:09 np0005589310 podman[241799]: 2026-01-20 19:23:09.404853193 +0000 UTC m=+0.039312257 container create 4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 14:23:09 np0005589310 systemd[1]: Started libpod-conmon-4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135.scope.
Jan 20 14:23:09 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:23:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd1ef413a5f558fe326c1b95bfc2878d14fba8dc59ccddbe160851aa1f905b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd1ef413a5f558fe326c1b95bfc2878d14fba8dc59ccddbe160851aa1f905b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd1ef413a5f558fe326c1b95bfc2878d14fba8dc59ccddbe160851aa1f905b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:09 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd1ef413a5f558fe326c1b95bfc2878d14fba8dc59ccddbe160851aa1f905b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:23:09 np0005589310 podman[241799]: 2026-01-20 19:23:09.480604698 +0000 UTC m=+0.115063782 container init 4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:23:09 np0005589310 podman[241799]: 2026-01-20 19:23:09.386127362 +0000 UTC m=+0.020586426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:23:09 np0005589310 podman[241799]: 2026-01-20 19:23:09.492488013 +0000 UTC m=+0.126947067 container start 4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:23:09 np0005589310 podman[241799]: 2026-01-20 19:23:09.495716922 +0000 UTC m=+0.130175976 container attach 4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:23:10 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:10 np0005589310 lvm[241893]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:23:10 np0005589310 lvm[241893]: VG ceph_vg0 finished
Jan 20 14:23:10 np0005589310 lvm[241894]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:23:10 np0005589310 lvm[241894]: VG ceph_vg1 finished
Jan 20 14:23:10 np0005589310 lvm[241896]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:23:10 np0005589310 lvm[241896]: VG ceph_vg2 finished
Jan 20 14:23:10 np0005589310 sleepy_babbage[241815]: {}
Jan 20 14:23:10 np0005589310 systemd[1]: libpod-4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135.scope: Deactivated successfully.
Jan 20 14:23:10 np0005589310 podman[241799]: 2026-01-20 19:23:10.357673547 +0000 UTC m=+0.992132621 container died 4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 20 14:23:10 np0005589310 systemd[1]: libpod-4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135.scope: Consumed 1.302s CPU time.
Jan 20 14:23:10 np0005589310 systemd[1]: var-lib-containers-storage-overlay-7fd1ef413a5f558fe326c1b95bfc2878d14fba8dc59ccddbe160851aa1f905b8-merged.mount: Deactivated successfully.
Jan 20 14:23:10 np0005589310 podman[241799]: 2026-01-20 19:23:10.396472432 +0000 UTC m=+1.030931486 container remove 4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 20 14:23:10 np0005589310 systemd[1]: libpod-conmon-4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135.scope: Deactivated successfully.
Jan 20 14:23:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:23:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:23:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:23:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:23:11 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:23:11 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:23:12 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:13 np0005589310 podman[241936]: 2026-01-20 19:23:13.428580517 +0000 UTC m=+0.098159775 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 20 14:23:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:23:14 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:15 np0005589310 podman[241962]: 2026-01-20 19:23:15.37795847 +0000 UTC m=+0.056056691 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 14:23:16 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:18 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:23:20 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:22 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:23:24 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:26 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:28 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:23:30 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:31 np0005589310 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:23:31 np0005589310 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 5863 writes, 24K keys, 5863 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5863 writes, 1003 syncs, 5.85 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s#012Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 20 14:23:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:23:31
Jan 20 14:23:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:23:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:23:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.data']
Jan 20 14:23:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:23:32 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:23:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:23:35 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:23:35 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 7128 writes, 29K keys, 7128 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7128 writes, 1427 syncs, 5.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 20 14:23:36 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:38 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:23:40 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:42 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:42 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:23:42 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5637 writes, 24K keys, 5637 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5637 writes, 873 syncs, 6.46 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 20 14:23:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:44 np0005589310 podman[241983]: 2026-01-20 19:23:44.409632193 +0000 UTC m=+0.088038051 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:23:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:23:46 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:46 np0005589310 podman[242009]: 2026-01-20 19:23:46.366477283 +0000 UTC m=+0.046219264 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 14:23:46 np0005589310 ceph-mgr[75417]: [devicehealth INFO root] Check health
Jan 20 14:23:48 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:23:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 14:23:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2573394814' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 14:23:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 14:23:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2573394814' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 14:23:50 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.600261) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937030600314, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1500, "num_deletes": 251, "total_data_size": 2411145, "memory_usage": 2456784, "flush_reason": "Manual Compaction"}
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937030615632, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2377337, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14863, "largest_seqno": 16362, "table_properties": {"data_size": 2370364, "index_size": 4044, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14221, "raw_average_key_size": 19, "raw_value_size": 2356428, "raw_average_value_size": 3259, "num_data_blocks": 185, "num_entries": 723, "num_filter_entries": 723, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936869, "oldest_key_time": 1768936869, "file_creation_time": 1768937030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 15424 microseconds, and 5453 cpu microseconds.
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.615692) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2377337 bytes OK
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.615710) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.617408) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.617435) EVENT_LOG_v1 {"time_micros": 1768937030617431, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.617452) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2404594, prev total WAL file size 2404594, number of live WAL files 2.
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.618092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2321KB)], [35(7181KB)]
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937030618154, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9731233, "oldest_snapshot_seqno": -1}
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4032 keys, 7908738 bytes, temperature: kUnknown
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937030683784, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7908738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7879686, "index_size": 17870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 98440, "raw_average_key_size": 24, "raw_value_size": 7804631, "raw_average_value_size": 1935, "num_data_blocks": 755, "num_entries": 4032, "num_filter_entries": 4032, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 1768937030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.684519) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7908738 bytes
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.685792) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.1 rd, 119.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 7.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(7.4) write-amplify(3.3) OK, records in: 4546, records dropped: 514 output_compression: NoCompression
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.685806) EVENT_LOG_v1 {"time_micros": 1768937030685799, "job": 16, "event": "compaction_finished", "compaction_time_micros": 66155, "compaction_time_cpu_micros": 17296, "output_level": 6, "num_output_files": 1, "total_output_size": 7908738, "num_input_records": 4546, "num_output_records": 4032, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937030686255, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937030687563, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.618029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.687674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.687680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.687682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.687684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:50 np0005589310 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.687686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 14:23:52 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:23:54 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:56 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:57 np0005589310 nova_compute[239038]: 2026-01-20 19:23:57.091 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:23:57 np0005589310 nova_compute[239038]: 2026-01-20 19:23:57.115 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:23:57 np0005589310 nova_compute[239038]: 2026-01-20 19:23:57.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:23:57 np0005589310 nova_compute[239038]: 2026-01-20 19:23:57.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:23:57 np0005589310 nova_compute[239038]: 2026-01-20 19:23:57.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:23:57 np0005589310 nova_compute[239038]: 2026-01-20 19:23:57.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:23:58 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:23:58 np0005589310 nova_compute[239038]: 2026-01-20 19:23:58.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:23:58 np0005589310 nova_compute[239038]: 2026-01-20 19:23:58.684 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 20 14:23:58 np0005589310 nova_compute[239038]: 2026-01-20 19:23:58.684 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 20 14:23:58 np0005589310 nova_compute[239038]: 2026-01-20 19:23:58.700 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 20 14:23:58 np0005589310 nova_compute[239038]: 2026-01-20 19:23:58.701 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:23:58 np0005589310 nova_compute[239038]: 2026-01-20 19:23:58.701 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:23:58 np0005589310 nova_compute[239038]: 2026-01-20 19:23:58.702 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 20 14:23:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:24:00 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:00 np0005589310 nova_compute[239038]: 2026-01-20 19:24:00.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:24:00 np0005589310 nova_compute[239038]: 2026-01-20 19:24:00.705 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:24:00 np0005589310 nova_compute[239038]: 2026-01-20 19:24:00.705 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:24:00 np0005589310 nova_compute[239038]: 2026-01-20 19:24:00.706 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:24:00 np0005589310 nova_compute[239038]: 2026-01-20 19:24:00.706 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 20 14:24:00 np0005589310 nova_compute[239038]: 2026-01-20 19:24:00.706 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:24:01 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:24:01 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715516207' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:24:01 np0005589310 nova_compute[239038]: 2026-01-20 19:24:01.214 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:24:01 np0005589310 nova_compute[239038]: 2026-01-20 19:24:01.352 239044 WARNING nova.virt.libvirt.driver [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 20 14:24:01 np0005589310 nova_compute[239038]: 2026-01-20 19:24:01.353 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5162MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 20 14:24:01 np0005589310 nova_compute[239038]: 2026-01-20 19:24:01.353 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:24:01 np0005589310 nova_compute[239038]: 2026-01-20 19:24:01.354 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:24:01 np0005589310 nova_compute[239038]: 2026-01-20 19:24:01.402 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 20 14:24:01 np0005589310 nova_compute[239038]: 2026-01-20 19:24:01.403 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 20 14:24:01 np0005589310 nova_compute[239038]: 2026-01-20 19:24:01.415 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:24:01 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:24:01 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999436920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:24:01 np0005589310 nova_compute[239038]: 2026-01-20 19:24:01.956 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:24:01 np0005589310 nova_compute[239038]: 2026-01-20 19:24:01.961 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed in ProviderTree for provider: 178956bf-6050-42b7-876f-3f96271cf4ff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 20 14:24:01 np0005589310 nova_compute[239038]: 2026-01-20 19:24:01.977 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed for provider 178956bf-6050-42b7-876f-3f96271cf4ff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 20 14:24:01 np0005589310 nova_compute[239038]: 2026-01-20 19:24:01.978 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 20 14:24:01 np0005589310 nova_compute[239038]: 2026-01-20 19:24:01.978 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:24:02 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:24:04 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:24:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:24:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:24:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:24:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:24:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:24:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:24:05.450 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:24:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:24:05.451 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:24:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:24:05.451 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:24:06 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:08 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:24:10 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:24:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:24:10 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:24:10 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:24:11 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:24:12 np0005589310 podman[242285]: 2026-01-20 19:24:12.062380249 +0000 UTC m=+0.040520677 container create b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_maxwell, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:24:12 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:12 np0005589310 systemd[1]: Started libpod-conmon-b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77.scope.
Jan 20 14:24:12 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:24:12 np0005589310 podman[242285]: 2026-01-20 19:24:12.13346871 +0000 UTC m=+0.111609158 container init b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_maxwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 14:24:12 np0005589310 podman[242285]: 2026-01-20 19:24:12.047130952 +0000 UTC m=+0.025271400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:24:12 np0005589310 podman[242285]: 2026-01-20 19:24:12.148640585 +0000 UTC m=+0.126781013 container start b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 14:24:12 np0005589310 podman[242285]: 2026-01-20 19:24:12.152004167 +0000 UTC m=+0.130144595 container attach b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:24:12 np0005589310 elated_maxwell[242302]: 167 167
Jan 20 14:24:12 np0005589310 podman[242285]: 2026-01-20 19:24:12.15919766 +0000 UTC m=+0.137338088 container died b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_maxwell, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:24:12 np0005589310 systemd[1]: libpod-b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77.scope: Deactivated successfully.
Jan 20 14:24:12 np0005589310 systemd[1]: var-lib-containers-storage-overlay-f2d3543d4146eef796ef3b5a92f240c8c0c40fd6965b9115a6c64e3a6fe559a2-merged.mount: Deactivated successfully.
Jan 20 14:24:12 np0005589310 podman[242285]: 2026-01-20 19:24:12.1998753 +0000 UTC m=+0.178015728 container remove b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_maxwell, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:24:12 np0005589310 systemd[1]: libpod-conmon-b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77.scope: Deactivated successfully.
Jan 20 14:24:12 np0005589310 podman[242325]: 2026-01-20 19:24:12.381524903 +0000 UTC m=+0.047117805 container create 783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:24:12 np0005589310 systemd[1]: Started libpod-conmon-783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf.scope.
Jan 20 14:24:12 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:24:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc7cd4bea1a54b21596aa86916c6b610eb3e1123efb02f848af5b178aac4e3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc7cd4bea1a54b21596aa86916c6b610eb3e1123efb02f848af5b178aac4e3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc7cd4bea1a54b21596aa86916c6b610eb3e1123efb02f848af5b178aac4e3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc7cd4bea1a54b21596aa86916c6b610eb3e1123efb02f848af5b178aac4e3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:12 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc7cd4bea1a54b21596aa86916c6b610eb3e1123efb02f848af5b178aac4e3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:12 np0005589310 podman[242325]: 2026-01-20 19:24:12.362464815 +0000 UTC m=+0.028057767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:24:12 np0005589310 podman[242325]: 2026-01-20 19:24:12.471279215 +0000 UTC m=+0.136872147 container init 783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_swartz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 14:24:12 np0005589310 podman[242325]: 2026-01-20 19:24:12.479812611 +0000 UTC m=+0.145405513 container start 783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:24:12 np0005589310 podman[242325]: 2026-01-20 19:24:12.4839166 +0000 UTC m=+0.149509502 container attach 783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:24:12 np0005589310 loving_swartz[242341]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:24:12 np0005589310 loving_swartz[242341]: --> All data devices are unavailable
Jan 20 14:24:12 np0005589310 systemd[1]: libpod-783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf.scope: Deactivated successfully.
Jan 20 14:24:12 np0005589310 podman[242325]: 2026-01-20 19:24:12.956022408 +0000 UTC m=+0.621615310 container died 783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_swartz, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:24:12 np0005589310 systemd[1]: var-lib-containers-storage-overlay-4dc7cd4bea1a54b21596aa86916c6b610eb3e1123efb02f848af5b178aac4e3f-merged.mount: Deactivated successfully.
Jan 20 14:24:12 np0005589310 podman[242325]: 2026-01-20 19:24:12.99556254 +0000 UTC m=+0.661155442 container remove 783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 14:24:13 np0005589310 systemd[1]: libpod-conmon-783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf.scope: Deactivated successfully.
Jan 20 14:24:13 np0005589310 podman[242435]: 2026-01-20 19:24:13.406075705 +0000 UTC m=+0.037525995 container create 54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 14:24:13 np0005589310 systemd[1]: Started libpod-conmon-54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e.scope.
Jan 20 14:24:13 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:24:13 np0005589310 podman[242435]: 2026-01-20 19:24:13.476532162 +0000 UTC m=+0.107982462 container init 54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:24:13 np0005589310 podman[242435]: 2026-01-20 19:24:13.483956081 +0000 UTC m=+0.115406361 container start 54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 14:24:13 np0005589310 podman[242435]: 2026-01-20 19:24:13.389034855 +0000 UTC m=+0.020485165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:24:13 np0005589310 podman[242435]: 2026-01-20 19:24:13.487113427 +0000 UTC m=+0.118563707 container attach 54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:24:13 np0005589310 kind_goldberg[242451]: 167 167
Jan 20 14:24:13 np0005589310 systemd[1]: libpod-54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e.scope: Deactivated successfully.
Jan 20 14:24:13 np0005589310 podman[242435]: 2026-01-20 19:24:13.490270813 +0000 UTC m=+0.121721153 container died 54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:24:13 np0005589310 systemd[1]: var-lib-containers-storage-overlay-3d91e8edc2fac5066a6ff7de221c49f344d9255be4a8a45fd2f6ec3a0c27c190-merged.mount: Deactivated successfully.
Jan 20 14:24:13 np0005589310 podman[242435]: 2026-01-20 19:24:13.532545521 +0000 UTC m=+0.163995821 container remove 54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:24:13 np0005589310 systemd[1]: libpod-conmon-54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e.scope: Deactivated successfully.
Jan 20 14:24:13 np0005589310 podman[242475]: 2026-01-20 19:24:13.689551592 +0000 UTC m=+0.037668518 container create 5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:24:13 np0005589310 systemd[1]: Started libpod-conmon-5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f.scope.
Jan 20 14:24:13 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:24:13 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7cfc68948daa9b8393f9122158673458a4029e6d62b7e3c195cb6138c22af7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:13 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7cfc68948daa9b8393f9122158673458a4029e6d62b7e3c195cb6138c22af7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:13 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7cfc68948daa9b8393f9122158673458a4029e6d62b7e3c195cb6138c22af7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:13 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7cfc68948daa9b8393f9122158673458a4029e6d62b7e3c195cb6138c22af7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:13 np0005589310 podman[242475]: 2026-01-20 19:24:13.759730372 +0000 UTC m=+0.107847318 container init 5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_colden, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:24:13 np0005589310 podman[242475]: 2026-01-20 19:24:13.765396728 +0000 UTC m=+0.113513654 container start 5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_colden, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:24:13 np0005589310 podman[242475]: 2026-01-20 19:24:13.767845027 +0000 UTC m=+0.115961953 container attach 5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 14:24:13 np0005589310 podman[242475]: 2026-01-20 19:24:13.67327233 +0000 UTC m=+0.021389256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:24:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:24:14 np0005589310 nervous_colden[242492]: {
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:    "0": [
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:        {
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "devices": [
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "/dev/loop3"
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            ],
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "lv_name": "ceph_lv0",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "lv_size": "21470642176",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "name": "ceph_lv0",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "tags": {
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.cluster_name": "ceph",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.crush_device_class": "",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.encrypted": "0",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.objectstore": "bluestore",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.osd_id": "0",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.type": "block",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.vdo": "0",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.with_tpm": "0"
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            },
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "type": "block",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "vg_name": "ceph_vg0"
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:        }
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:    ],
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:    "1": [
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:        {
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "devices": [
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "/dev/loop4"
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            ],
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "lv_name": "ceph_lv1",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "lv_size": "21470642176",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "name": "ceph_lv1",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "tags": {
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.cluster_name": "ceph",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.crush_device_class": "",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.encrypted": "0",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.objectstore": "bluestore",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.osd_id": "1",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.type": "block",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.vdo": "0",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.with_tpm": "0"
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            },
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "type": "block",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "vg_name": "ceph_vg1"
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:        }
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:    ],
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:    "2": [
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:        {
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "devices": [
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "/dev/loop5"
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            ],
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "lv_name": "ceph_lv2",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "lv_size": "21470642176",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "name": "ceph_lv2",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "tags": {
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.cluster_name": "ceph",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.crush_device_class": "",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.encrypted": "0",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.objectstore": "bluestore",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.osd_id": "2",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.type": "block",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.vdo": "0",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:                "ceph.with_tpm": "0"
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            },
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "type": "block",
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:            "vg_name": "ceph_vg2"
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:        }
Jan 20 14:24:14 np0005589310 nervous_colden[242492]:    ]
Jan 20 14:24:14 np0005589310 nervous_colden[242492]: }
Jan 20 14:24:14 np0005589310 systemd[1]: libpod-5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f.scope: Deactivated successfully.
Jan 20 14:24:14 np0005589310 podman[242475]: 2026-01-20 19:24:14.056061127 +0000 UTC m=+0.404178053 container died 5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_colden, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:24:14 np0005589310 systemd[1]: var-lib-containers-storage-overlay-1a7cfc68948daa9b8393f9122158673458a4029e6d62b7e3c195cb6138c22af7-merged.mount: Deactivated successfully.
Jan 20 14:24:14 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:14 np0005589310 podman[242475]: 2026-01-20 19:24:14.096553302 +0000 UTC m=+0.444670228 container remove 5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_colden, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:24:14 np0005589310 systemd[1]: libpod-conmon-5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f.scope: Deactivated successfully.
Jan 20 14:24:14 np0005589310 podman[242577]: 2026-01-20 19:24:14.545735069 +0000 UTC m=+0.045918507 container create 7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wu, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:24:14 np0005589310 systemd[1]: Started libpod-conmon-7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9.scope.
Jan 20 14:24:14 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:24:14 np0005589310 podman[242577]: 2026-01-20 19:24:14.613721547 +0000 UTC m=+0.113904995 container init 7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:24:14 np0005589310 podman[242577]: 2026-01-20 19:24:14.52168083 +0000 UTC m=+0.021864308 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:24:14 np0005589310 podman[242577]: 2026-01-20 19:24:14.621231787 +0000 UTC m=+0.121415215 container start 7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:24:14 np0005589310 podman[242577]: 2026-01-20 19:24:14.625250204 +0000 UTC m=+0.125433632 container attach 7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 14:24:14 np0005589310 bold_wu[242594]: 167 167
Jan 20 14:24:14 np0005589310 systemd[1]: libpod-7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9.scope: Deactivated successfully.
Jan 20 14:24:14 np0005589310 podman[242577]: 2026-01-20 19:24:14.628087642 +0000 UTC m=+0.128271080 container died 7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wu, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:24:14 np0005589310 systemd[1]: var-lib-containers-storage-overlay-da436940e6a717d6d99063ea972526cb48b80ad8dcaf62a6460ec128891441af-merged.mount: Deactivated successfully.
Jan 20 14:24:14 np0005589310 podman[242577]: 2026-01-20 19:24:14.668232989 +0000 UTC m=+0.168416417 container remove 7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:24:14 np0005589310 podman[242591]: 2026-01-20 19:24:14.670270918 +0000 UTC m=+0.089871215 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Jan 20 14:24:14 np0005589310 systemd[1]: libpod-conmon-7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9.scope: Deactivated successfully.
Jan 20 14:24:14 np0005589310 podman[242642]: 2026-01-20 19:24:14.837192927 +0000 UTC m=+0.043284463 container create e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_neumann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 20 14:24:14 np0005589310 systemd[1]: Started libpod-conmon-e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858.scope.
Jan 20 14:24:14 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:24:14 np0005589310 podman[242642]: 2026-01-20 19:24:14.817222887 +0000 UTC m=+0.023314433 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:24:14 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2d4756bb441ec1c6bf2771427429c395e14bfb109b38a8fbd1c1c9119071650/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:14 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2d4756bb441ec1c6bf2771427429c395e14bfb109b38a8fbd1c1c9119071650/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:14 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2d4756bb441ec1c6bf2771427429c395e14bfb109b38a8fbd1c1c9119071650/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:14 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2d4756bb441ec1c6bf2771427429c395e14bfb109b38a8fbd1c1c9119071650/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:24:14 np0005589310 podman[242642]: 2026-01-20 19:24:14.931470748 +0000 UTC m=+0.137562284 container init e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_neumann, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 14:24:14 np0005589310 podman[242642]: 2026-01-20 19:24:14.940023084 +0000 UTC m=+0.146114600 container start e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_neumann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 14:24:14 np0005589310 podman[242642]: 2026-01-20 19:24:14.943351374 +0000 UTC m=+0.149443030 container attach e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 20 14:24:15 np0005589310 lvm[242738]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:24:15 np0005589310 lvm[242739]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:24:15 np0005589310 lvm[242739]: VG ceph_vg1 finished
Jan 20 14:24:15 np0005589310 lvm[242738]: VG ceph_vg0 finished
Jan 20 14:24:15 np0005589310 lvm[242741]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:24:15 np0005589310 lvm[242741]: VG ceph_vg2 finished
Jan 20 14:24:15 np0005589310 nifty_neumann[242658]: {}
Jan 20 14:24:15 np0005589310 systemd[1]: libpod-e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858.scope: Deactivated successfully.
Jan 20 14:24:15 np0005589310 systemd[1]: libpod-e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858.scope: Consumed 1.245s CPU time.
Jan 20 14:24:15 np0005589310 podman[242642]: 2026-01-20 19:24:15.715867637 +0000 UTC m=+0.921959153 container died e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 20 14:24:15 np0005589310 systemd[1]: var-lib-containers-storage-overlay-b2d4756bb441ec1c6bf2771427429c395e14bfb109b38a8fbd1c1c9119071650-merged.mount: Deactivated successfully.
Jan 20 14:24:15 np0005589310 podman[242642]: 2026-01-20 19:24:15.759245551 +0000 UTC m=+0.965337107 container remove e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_neumann, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 14:24:15 np0005589310 systemd[1]: libpod-conmon-e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858.scope: Deactivated successfully.
Jan 20 14:24:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:24:15 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:24:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:24:15 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:24:15 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:24:15 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:24:16 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:17 np0005589310 podman[242782]: 2026-01-20 19:24:17.383888493 +0000 UTC m=+0.056099691 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:24:18 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:24:20 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:22 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:24:24 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Jan 20 14:24:26 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Jan 20 14:24:28 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Jan 20 14:24:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:24:30 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 14:24:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:24:31
Jan 20 14:24:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:24:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:24:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.mgr', 'images', 'volumes', 'vms', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 20 14:24:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:24:32 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 14:24:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:24:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:24:36 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Jan 20 14:24:38 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 20 14:24:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:24:40 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 20 14:24:42 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:24:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:24:45 np0005589310 podman[242801]: 2026-01-20 19:24:45.430190297 +0000 UTC m=+0.102143901 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 20 14:24:46 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:48 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:48 np0005589310 podman[242827]: 2026-01-20 19:24:48.407304939 +0000 UTC m=+0.070348336 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 20 14:24:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:24:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 14:24:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4018499975' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 14:24:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 14:24:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4018499975' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 14:24:50 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:52 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:24:54 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:54 np0005589310 nova_compute[239038]: 2026-01-20 19:24:54.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:24:54 np0005589310 nova_compute[239038]: 2026-01-20 19:24:54.684 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 20 14:24:54 np0005589310 nova_compute[239038]: 2026-01-20 19:24:54.703 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 20 14:24:54 np0005589310 nova_compute[239038]: 2026-01-20 19:24:54.705 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:24:54 np0005589310 nova_compute[239038]: 2026-01-20 19:24:54.705 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 20 14:24:54 np0005589310 nova_compute[239038]: 2026-01-20 19:24:54.720 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:24:55 np0005589310 nova_compute[239038]: 2026-01-20 19:24:55.733 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:24:56 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:57 np0005589310 nova_compute[239038]: 2026-01-20 19:24:57.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:24:58 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:24:58 np0005589310 nova_compute[239038]: 2026-01-20 19:24:58.677 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:24:58 np0005589310 nova_compute[239038]: 2026-01-20 19:24:58.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:24:58 np0005589310 nova_compute[239038]: 2026-01-20 19:24:58.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 20 14:24:58 np0005589310 nova_compute[239038]: 2026-01-20 19:24:58.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 20 14:24:58 np0005589310 nova_compute[239038]: 2026-01-20 19:24:58.696 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 20 14:24:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:24:59 np0005589310 nova_compute[239038]: 2026-01-20 19:24:59.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:24:59 np0005589310 nova_compute[239038]: 2026-01-20 19:24:59.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:24:59 np0005589310 nova_compute[239038]: 2026-01-20 19:24:59.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:24:59 np0005589310 nova_compute[239038]: 2026-01-20 19:24:59.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:24:59 np0005589310 nova_compute[239038]: 2026-01-20 19:24:59.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 20 14:25:00 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:00 np0005589310 nova_compute[239038]: 2026-01-20 19:25:00.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:25:00 np0005589310 nova_compute[239038]: 2026-01-20 19:25:00.707 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:25:00 np0005589310 nova_compute[239038]: 2026-01-20 19:25:00.707 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:25:00 np0005589310 nova_compute[239038]: 2026-01-20 19:25:00.708 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:25:00 np0005589310 nova_compute[239038]: 2026-01-20 19:25:00.708 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 20 14:25:00 np0005589310 nova_compute[239038]: 2026-01-20 19:25:00.708 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:25:01 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:25:01 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/689520366' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:25:01 np0005589310 nova_compute[239038]: 2026-01-20 19:25:01.269 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:25:01 np0005589310 nova_compute[239038]: 2026-01-20 19:25:01.429 239044 WARNING nova.virt.libvirt.driver [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 20 14:25:01 np0005589310 nova_compute[239038]: 2026-01-20 19:25:01.430 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5159MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 20 14:25:01 np0005589310 nova_compute[239038]: 2026-01-20 19:25:01.430 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:25:01 np0005589310 nova_compute[239038]: 2026-01-20 19:25:01.430 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:25:01 np0005589310 nova_compute[239038]: 2026-01-20 19:25:01.622 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 20 14:25:01 np0005589310 nova_compute[239038]: 2026-01-20 19:25:01.623 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 20 14:25:01 np0005589310 nova_compute[239038]: 2026-01-20 19:25:01.700 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Refreshing inventories for resource provider 178956bf-6050-42b7-876f-3f96271cf4ff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 20 14:25:01 np0005589310 nova_compute[239038]: 2026-01-20 19:25:01.765 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Updating ProviderTree inventory for provider 178956bf-6050-42b7-876f-3f96271cf4ff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 20 14:25:01 np0005589310 nova_compute[239038]: 2026-01-20 19:25:01.765 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Updating inventory in ProviderTree for provider 178956bf-6050-42b7-876f-3f96271cf4ff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 20 14:25:01 np0005589310 nova_compute[239038]: 2026-01-20 19:25:01.782 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Refreshing aggregate associations for resource provider 178956bf-6050-42b7-876f-3f96271cf4ff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 20 14:25:01 np0005589310 nova_compute[239038]: 2026-01-20 19:25:01.805 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Refreshing trait associations for resource provider 178956bf-6050-42b7-876f-3f96271cf4ff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AESNI,COMPUTE_DEVICE_TAGGING,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,HW_CPU_X86_SVM,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 20 14:25:01 np0005589310 nova_compute[239038]: 2026-01-20 19:25:01.819 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:25:02 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:02 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:25:02 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3318937435' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:25:02 np0005589310 nova_compute[239038]: 2026-01-20 19:25:02.342 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:25:02 np0005589310 nova_compute[239038]: 2026-01-20 19:25:02.348 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed in ProviderTree for provider: 178956bf-6050-42b7-876f-3f96271cf4ff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 20 14:25:02 np0005589310 nova_compute[239038]: 2026-01-20 19:25:02.363 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed for provider 178956bf-6050-42b7-876f-3f96271cf4ff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 20 14:25:02 np0005589310 nova_compute[239038]: 2026-01-20 19:25:02.365 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 20 14:25:02 np0005589310 nova_compute[239038]: 2026-01-20 19:25:02.366 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:25:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:25:04 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:25:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:25:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:25:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:25:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:25:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:25:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:25:05.452 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:25:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:25:05.453 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:25:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:25:05.453 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:25:06 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:08 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:25:10 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:12 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:25:14 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:16 np0005589310 podman[242914]: 2026-01-20 19:25:16.055565689 +0000 UTC m=+0.081907952 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:25:16 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 20 14:25:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 20 14:25:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:25:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:25:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:25:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:25:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:25:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:25:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:25:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:25:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:25:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:25:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:25:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:25:16 np0005589310 podman[243059]: 2026-01-20 19:25:16.967788771 +0000 UTC m=+0.036190877 container create e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kowalevski, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 14:25:17 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 20 14:25:17 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:25:17 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:25:17 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:25:17 np0005589310 systemd[1]: Started libpod-conmon-e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7.scope.
Jan 20 14:25:17 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:25:17 np0005589310 podman[243059]: 2026-01-20 19:25:16.951799494 +0000 UTC m=+0.020201620 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:25:17 np0005589310 podman[243059]: 2026-01-20 19:25:17.053173927 +0000 UTC m=+0.121576033 container init e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:25:17 np0005589310 podman[243059]: 2026-01-20 19:25:17.05988158 +0000 UTC m=+0.128283686 container start e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kowalevski, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:25:17 np0005589310 podman[243059]: 2026-01-20 19:25:17.063253991 +0000 UTC m=+0.131656127 container attach e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:25:17 np0005589310 sharp_kowalevski[243075]: 167 167
Jan 20 14:25:17 np0005589310 systemd[1]: libpod-e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7.scope: Deactivated successfully.
Jan 20 14:25:17 np0005589310 conmon[243075]: conmon e4d058f7ea15ff03fde6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7.scope/container/memory.events
Jan 20 14:25:17 np0005589310 podman[243059]: 2026-01-20 19:25:17.068442257 +0000 UTC m=+0.136844383 container died e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 14:25:17 np0005589310 systemd[1]: var-lib-containers-storage-overlay-e05b5868311faf661fc6089df37e4e96d2055abd2e58bde4bb8aa290f84923be-merged.mount: Deactivated successfully.
Jan 20 14:25:17 np0005589310 podman[243059]: 2026-01-20 19:25:17.110926394 +0000 UTC m=+0.179328500 container remove e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:25:17 np0005589310 systemd[1]: libpod-conmon-e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7.scope: Deactivated successfully.
Jan 20 14:25:17 np0005589310 podman[243100]: 2026-01-20 19:25:17.258269419 +0000 UTC m=+0.037982160 container create fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 20 14:25:17 np0005589310 systemd[1]: Started libpod-conmon-fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd.scope.
Jan 20 14:25:17 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:25:17 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d399ca3e7ae6ee35af6d8b1362fec367fd62cf3013878573ec0a2c8a71ebe3eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:17 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d399ca3e7ae6ee35af6d8b1362fec367fd62cf3013878573ec0a2c8a71ebe3eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:17 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d399ca3e7ae6ee35af6d8b1362fec367fd62cf3013878573ec0a2c8a71ebe3eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:17 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d399ca3e7ae6ee35af6d8b1362fec367fd62cf3013878573ec0a2c8a71ebe3eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:17 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d399ca3e7ae6ee35af6d8b1362fec367fd62cf3013878573ec0a2c8a71ebe3eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:17 np0005589310 podman[243100]: 2026-01-20 19:25:17.328672483 +0000 UTC m=+0.108385234 container init fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 14:25:17 np0005589310 podman[243100]: 2026-01-20 19:25:17.336797999 +0000 UTC m=+0.116510750 container start fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:25:17 np0005589310 podman[243100]: 2026-01-20 19:25:17.243229576 +0000 UTC m=+0.022942337 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:25:17 np0005589310 podman[243100]: 2026-01-20 19:25:17.339670699 +0000 UTC m=+0.119383460 container attach fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:25:17 np0005589310 musing_bhaskara[243117]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:25:17 np0005589310 musing_bhaskara[243117]: --> All data devices are unavailable
Jan 20 14:25:17 np0005589310 systemd[1]: libpod-fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd.scope: Deactivated successfully.
Jan 20 14:25:17 np0005589310 podman[243100]: 2026-01-20 19:25:17.815854801 +0000 UTC m=+0.595567562 container died fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 14:25:17 np0005589310 systemd[1]: var-lib-containers-storage-overlay-d399ca3e7ae6ee35af6d8b1362fec367fd62cf3013878573ec0a2c8a71ebe3eb-merged.mount: Deactivated successfully.
Jan 20 14:25:17 np0005589310 podman[243100]: 2026-01-20 19:25:17.856970035 +0000 UTC m=+0.636682786 container remove fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 14:25:17 np0005589310 systemd[1]: libpod-conmon-fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd.scope: Deactivated successfully.
Jan 20 14:25:18 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:18 np0005589310 podman[243213]: 2026-01-20 19:25:18.355730472 +0000 UTC m=+0.058205238 container create bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:25:18 np0005589310 systemd[1]: Started libpod-conmon-bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420.scope.
Jan 20 14:25:18 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:25:18 np0005589310 podman[243213]: 2026-01-20 19:25:18.336638331 +0000 UTC m=+0.039113077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:25:18 np0005589310 podman[243213]: 2026-01-20 19:25:18.44822815 +0000 UTC m=+0.150702906 container init bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:25:18 np0005589310 podman[243213]: 2026-01-20 19:25:18.45937557 +0000 UTC m=+0.161850306 container start bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 14:25:18 np0005589310 podman[243213]: 2026-01-20 19:25:18.463009078 +0000 UTC m=+0.165483804 container attach bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_payne, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:25:18 np0005589310 magical_payne[243229]: 167 167
Jan 20 14:25:18 np0005589310 podman[243213]: 2026-01-20 19:25:18.467015925 +0000 UTC m=+0.169490661 container died bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:25:18 np0005589310 systemd[1]: libpod-bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420.scope: Deactivated successfully.
Jan 20 14:25:18 np0005589310 systemd[1]: var-lib-containers-storage-overlay-df19cd1b1a96b1f354a43aa808ac093fa33c7466073ffaa43df74474b281a892-merged.mount: Deactivated successfully.
Jan 20 14:25:18 np0005589310 podman[243213]: 2026-01-20 19:25:18.507690499 +0000 UTC m=+0.210165225 container remove bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:25:18 np0005589310 systemd[1]: libpod-conmon-bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420.scope: Deactivated successfully.
Jan 20 14:25:18 np0005589310 podman[243232]: 2026-01-20 19:25:18.543786613 +0000 UTC m=+0.085697775 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 14:25:18 np0005589310 podman[243271]: 2026-01-20 19:25:18.673630734 +0000 UTC m=+0.040962872 container create 4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 20 14:25:18 np0005589310 systemd[1]: Started libpod-conmon-4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7.scope.
Jan 20 14:25:18 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:25:18 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cd8963922de229c8308cac45d0a2136b3046defca06056ca921d8676d924a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:18 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cd8963922de229c8308cac45d0a2136b3046defca06056ca921d8676d924a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:18 np0005589310 podman[243271]: 2026-01-20 19:25:18.656459629 +0000 UTC m=+0.023791787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:25:18 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cd8963922de229c8308cac45d0a2136b3046defca06056ca921d8676d924a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:18 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cd8963922de229c8308cac45d0a2136b3046defca06056ca921d8676d924a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:18 np0005589310 podman[243271]: 2026-01-20 19:25:18.758278482 +0000 UTC m=+0.125610650 container init 4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 14:25:18 np0005589310 podman[243271]: 2026-01-20 19:25:18.772735992 +0000 UTC m=+0.140068140 container start 4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 14:25:18 np0005589310 podman[243271]: 2026-01-20 19:25:18.776043622 +0000 UTC m=+0.143375770 container attach 4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 14:25:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]: {
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:    "0": [
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:        {
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "devices": [
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "/dev/loop3"
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            ],
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "lv_name": "ceph_lv0",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "lv_size": "21470642176",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "name": "ceph_lv0",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "tags": {
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.cluster_name": "ceph",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.crush_device_class": "",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.encrypted": "0",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.objectstore": "bluestore",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.osd_id": "0",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.type": "block",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.vdo": "0",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.with_tpm": "0"
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            },
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "type": "block",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "vg_name": "ceph_vg0"
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:        }
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:    ],
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:    "1": [
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:        {
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "devices": [
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "/dev/loop4"
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            ],
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "lv_name": "ceph_lv1",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "lv_size": "21470642176",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "name": "ceph_lv1",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "tags": {
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.cluster_name": "ceph",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.crush_device_class": "",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.encrypted": "0",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.objectstore": "bluestore",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.osd_id": "1",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.type": "block",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.vdo": "0",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.with_tpm": "0"
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            },
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "type": "block",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "vg_name": "ceph_vg1"
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:        }
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:    ],
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:    "2": [
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:        {
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "devices": [
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "/dev/loop5"
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            ],
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "lv_name": "ceph_lv2",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "lv_size": "21470642176",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "name": "ceph_lv2",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "tags": {
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.cluster_name": "ceph",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.crush_device_class": "",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.encrypted": "0",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.objectstore": "bluestore",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.osd_id": "2",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.type": "block",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.vdo": "0",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:                "ceph.with_tpm": "0"
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            },
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "type": "block",
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:            "vg_name": "ceph_vg2"
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:        }
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]:    ]
Jan 20 14:25:19 np0005589310 nostalgic_blackwell[243288]: }
Jan 20 14:25:19 np0005589310 systemd[1]: libpod-4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7.scope: Deactivated successfully.
Jan 20 14:25:19 np0005589310 podman[243297]: 2026-01-20 19:25:19.1376036 +0000 UTC m=+0.030783075 container died 4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_blackwell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 14:25:19 np0005589310 systemd[1]: var-lib-containers-storage-overlay-42cd8963922de229c8308cac45d0a2136b3046defca06056ca921d8676d924a2-merged.mount: Deactivated successfully.
Jan 20 14:25:19 np0005589310 podman[243297]: 2026-01-20 19:25:19.172744421 +0000 UTC m=+0.065923896 container remove 4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_blackwell, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:25:19 np0005589310 systemd[1]: libpod-conmon-4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7.scope: Deactivated successfully.
Jan 20 14:25:19 np0005589310 podman[243375]: 2026-01-20 19:25:19.627536144 +0000 UTC m=+0.037826516 container create 7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 14:25:19 np0005589310 systemd[1]: Started libpod-conmon-7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a.scope.
Jan 20 14:25:19 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:25:19 np0005589310 podman[243375]: 2026-01-20 19:25:19.61207898 +0000 UTC m=+0.022369372 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:25:19 np0005589310 podman[243375]: 2026-01-20 19:25:19.718821073 +0000 UTC m=+0.129111485 container init 7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:25:19 np0005589310 podman[243375]: 2026-01-20 19:25:19.730167067 +0000 UTC m=+0.140457439 container start 7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ritchie, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:25:19 np0005589310 podman[243375]: 2026-01-20 19:25:19.734018191 +0000 UTC m=+0.144308603 container attach 7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 20 14:25:19 np0005589310 distracted_ritchie[243392]: 167 167
Jan 20 14:25:19 np0005589310 systemd[1]: libpod-7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a.scope: Deactivated successfully.
Jan 20 14:25:19 np0005589310 podman[243375]: 2026-01-20 19:25:19.73485535 +0000 UTC m=+0.145145712 container died 7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:25:19 np0005589310 systemd[1]: var-lib-containers-storage-overlay-3ffdd10b4a27aec843de14b22ef3b9f083893be91e51eaee06eae81dd6b671b1-merged.mount: Deactivated successfully.
Jan 20 14:25:19 np0005589310 podman[243375]: 2026-01-20 19:25:19.773313932 +0000 UTC m=+0.183604284 container remove 7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ritchie, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:25:19 np0005589310 systemd[1]: libpod-conmon-7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a.scope: Deactivated successfully.
Jan 20 14:25:19 np0005589310 podman[243416]: 2026-01-20 19:25:19.957938119 +0000 UTC m=+0.054296175 container create a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:25:19 np0005589310 systemd[1]: Started libpod-conmon-a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528.scope.
Jan 20 14:25:20 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:25:20 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29874bbefc423cc6ff5fc398ef8f5cd8e7462550196467c10d6c639294933a9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:20 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29874bbefc423cc6ff5fc398ef8f5cd8e7462550196467c10d6c639294933a9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:20 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29874bbefc423cc6ff5fc398ef8f5cd8e7462550196467c10d6c639294933a9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:20 np0005589310 podman[243416]: 2026-01-20 19:25:19.929745606 +0000 UTC m=+0.026103762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:25:20 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29874bbefc423cc6ff5fc398ef8f5cd8e7462550196467c10d6c639294933a9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:25:20 np0005589310 podman[243416]: 2026-01-20 19:25:20.035203338 +0000 UTC m=+0.131561444 container init a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 20 14:25:20 np0005589310 podman[243416]: 2026-01-20 19:25:20.046131732 +0000 UTC m=+0.142489788 container start a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 14:25:20 np0005589310 podman[243416]: 2026-01-20 19:25:20.049026862 +0000 UTC m=+0.145384918 container attach a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:25:20 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:20 np0005589310 lvm[243511]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:25:20 np0005589310 lvm[243511]: VG ceph_vg1 finished
Jan 20 14:25:20 np0005589310 lvm[243510]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:25:20 np0005589310 lvm[243510]: VG ceph_vg0 finished
Jan 20 14:25:20 np0005589310 lvm[243513]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:25:20 np0005589310 lvm[243513]: VG ceph_vg2 finished
Jan 20 14:25:20 np0005589310 sharp_mendeleev[243432]: {}
Jan 20 14:25:20 np0005589310 systemd[1]: libpod-a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528.scope: Deactivated successfully.
Jan 20 14:25:20 np0005589310 systemd[1]: libpod-a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528.scope: Consumed 1.416s CPU time.
Jan 20 14:25:20 np0005589310 podman[243416]: 2026-01-20 19:25:20.929434283 +0000 UTC m=+1.025792359 container died a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 14:25:20 np0005589310 systemd[1]: var-lib-containers-storage-overlay-29874bbefc423cc6ff5fc398ef8f5cd8e7462550196467c10d6c639294933a9b-merged.mount: Deactivated successfully.
Jan 20 14:25:20 np0005589310 podman[243416]: 2026-01-20 19:25:20.972623698 +0000 UTC m=+1.068981754 container remove a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendeleev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 14:25:20 np0005589310 systemd[1]: libpod-conmon-a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528.scope: Deactivated successfully.
Jan 20 14:25:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:25:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:25:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:25:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:25:22 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:25:22 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:25:22 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:25:24 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:26 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:28 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:25:30 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:25:31
Jan 20 14:25:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:25:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:25:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'images', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'vms', '.rgw.root']
Jan 20 14:25:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:25:32 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:25:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:25:36 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:38 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:25:40 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:42 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:25:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:25:46 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:46 np0005589310 podman[243554]: 2026-01-20 19:25:46.458262875 +0000 UTC m=+0.130471287 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 14:25:48 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:25:49 np0005589310 podman[243581]: 2026-01-20 19:25:49.391495045 +0000 UTC m=+0.072177706 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 20 14:25:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 14:25:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/103715483' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 14:25:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 14:25:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/103715483' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 14:25:50 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:52 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:25:54 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:56 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:58 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:25:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:25:59 np0005589310 nova_compute[239038]: 2026-01-20 19:25:59.365 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:25:59 np0005589310 nova_compute[239038]: 2026-01-20 19:25:59.365 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 20 14:25:59 np0005589310 nova_compute[239038]: 2026-01-20 19:25:59.365 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 20 14:25:59 np0005589310 nova_compute[239038]: 2026-01-20 19:25:59.379 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 20 14:25:59 np0005589310 nova_compute[239038]: 2026-01-20 19:25:59.380 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:25:59 np0005589310 nova_compute[239038]: 2026-01-20 19:25:59.381 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:25:59 np0005589310 nova_compute[239038]: 2026-01-20 19:25:59.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:25:59 np0005589310 nova_compute[239038]: 2026-01-20 19:25:59.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:25:59 np0005589310 nova_compute[239038]: 2026-01-20 19:25:59.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:26:00 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:00 np0005589310 nova_compute[239038]: 2026-01-20 19:26:00.676 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:26:01 np0005589310 nova_compute[239038]: 2026-01-20 19:26:01.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:26:01 np0005589310 nova_compute[239038]: 2026-01-20 19:26:01.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:26:01 np0005589310 nova_compute[239038]: 2026-01-20 19:26:01.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 20 14:26:01 np0005589310 nova_compute[239038]: 2026-01-20 19:26:01.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:26:01 np0005589310 nova_compute[239038]: 2026-01-20 19:26:01.717 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:26:01 np0005589310 nova_compute[239038]: 2026-01-20 19:26:01.717 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:26:01 np0005589310 nova_compute[239038]: 2026-01-20 19:26:01.718 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:26:01 np0005589310 nova_compute[239038]: 2026-01-20 19:26:01.718 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 20 14:26:01 np0005589310 nova_compute[239038]: 2026-01-20 19:26:01.718 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:26:02 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:02 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:26:02 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/98757914' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:26:02 np0005589310 nova_compute[239038]: 2026-01-20 19:26:02.219 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:26:02 np0005589310 nova_compute[239038]: 2026-01-20 19:26:02.388 239044 WARNING nova.virt.libvirt.driver [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 20 14:26:02 np0005589310 nova_compute[239038]: 2026-01-20 19:26:02.389 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5163MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 20 14:26:02 np0005589310 nova_compute[239038]: 2026-01-20 19:26:02.389 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:26:02 np0005589310 nova_compute[239038]: 2026-01-20 19:26:02.390 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:26:02 np0005589310 nova_compute[239038]: 2026-01-20 19:26:02.444 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 20 14:26:02 np0005589310 nova_compute[239038]: 2026-01-20 19:26:02.444 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 20 14:26:02 np0005589310 nova_compute[239038]: 2026-01-20 19:26:02.458 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:26:02 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:26:02 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1778637990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:26:03 np0005589310 nova_compute[239038]: 2026-01-20 19:26:03.018 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:26:03 np0005589310 nova_compute[239038]: 2026-01-20 19:26:03.025 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed in ProviderTree for provider: 178956bf-6050-42b7-876f-3f96271cf4ff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 20 14:26:03 np0005589310 nova_compute[239038]: 2026-01-20 19:26:03.044 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed for provider 178956bf-6050-42b7-876f-3f96271cf4ff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 20 14:26:03 np0005589310 nova_compute[239038]: 2026-01-20 19:26:03.047 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 20 14:26:03 np0005589310 nova_compute[239038]: 2026-01-20 19:26:03.047 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:26:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:26:04 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:26:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:26:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:26:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:26:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:26:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:26:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:26:05.453 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:26:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:26:05.454 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:26:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:26:05.454 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:26:06 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:08 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:26:10 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:12 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:26:14 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:16 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:17 np0005589310 podman[243644]: 2026-01-20 19:26:17.427139312 +0000 UTC m=+0.104489080 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 14:26:18 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:26:20 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:20 np0005589310 podman[243670]: 2026-01-20 19:26:20.399254523 +0000 UTC m=+0.075382015 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:26:22 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:22 np0005589310 podman[243786]: 2026-01-20 19:26:22.441423233 +0000 UTC m=+0.065188438 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:26:22 np0005589310 podman[243786]: 2026-01-20 19:26:22.54172916 +0000 UTC m=+0.165494365 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:26:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:26:24 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:24 np0005589310 podman[244113]: 2026-01-20 19:26:24.255353651 +0000 UTC m=+0.038765829 container create 517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:26:24 np0005589310 systemd[1]: Started libpod-conmon-517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297.scope.
Jan 20 14:26:24 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:26:24 np0005589310 podman[244113]: 2026-01-20 19:26:24.33343844 +0000 UTC m=+0.116850648 container init 517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_swirles, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 20 14:26:24 np0005589310 podman[244113]: 2026-01-20 19:26:24.239380914 +0000 UTC m=+0.022793112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:26:24 np0005589310 podman[244113]: 2026-01-20 19:26:24.33918665 +0000 UTC m=+0.122598828 container start 517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 14:26:24 np0005589310 podman[244113]: 2026-01-20 19:26:24.342655583 +0000 UTC m=+0.126067761 container attach 517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_swirles, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 20 14:26:24 np0005589310 admiring_swirles[244129]: 167 167
Jan 20 14:26:24 np0005589310 systemd[1]: libpod-517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297.scope: Deactivated successfully.
Jan 20 14:26:24 np0005589310 conmon[244129]: conmon 517305a995d3d09ec60c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297.scope/container/memory.events
Jan 20 14:26:24 np0005589310 podman[244113]: 2026-01-20 19:26:24.345155604 +0000 UTC m=+0.128567802 container died 517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:26:24 np0005589310 systemd[1]: var-lib-containers-storage-overlay-a90d5d639e583dfee5fe97d90d244603e43dacdea7f249203ae3240dc2ae22bb-merged.mount: Deactivated successfully.
Jan 20 14:26:24 np0005589310 podman[244113]: 2026-01-20 19:26:24.437137689 +0000 UTC m=+0.220549867 container remove 517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_swirles, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:26:24 np0005589310 systemd[1]: libpod-conmon-517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297.scope: Deactivated successfully.
Jan 20 14:26:24 np0005589310 podman[244153]: 2026-01-20 19:26:24.576756757 +0000 UTC m=+0.036691979 container create cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:26:24 np0005589310 systemd[1]: Started libpod-conmon-cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2.scope.
Jan 20 14:26:24 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:26:24 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0586f776c0d42788629a66b3a2cc47e60ecdd7d4627c7e1c084fea11a7a40bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:24 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0586f776c0d42788629a66b3a2cc47e60ecdd7d4627c7e1c084fea11a7a40bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:24 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0586f776c0d42788629a66b3a2cc47e60ecdd7d4627c7e1c084fea11a7a40bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:24 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0586f776c0d42788629a66b3a2cc47e60ecdd7d4627c7e1c084fea11a7a40bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:24 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0586f776c0d42788629a66b3a2cc47e60ecdd7d4627c7e1c084fea11a7a40bf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:24 np0005589310 podman[244153]: 2026-01-20 19:26:24.64754983 +0000 UTC m=+0.107485062 container init cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:26:24 np0005589310 podman[244153]: 2026-01-20 19:26:24.654825626 +0000 UTC m=+0.114760858 container start cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 14:26:24 np0005589310 podman[244153]: 2026-01-20 19:26:24.562452411 +0000 UTC m=+0.022387663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:26:24 np0005589310 podman[244153]: 2026-01-20 19:26:24.658449394 +0000 UTC m=+0.118384626 container attach cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 14:26:24 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:26:24 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 14:26:25 np0005589310 elastic_sanderson[244171]: --> passed data devices: 0 physical, 3 LVM
Jan 20 14:26:25 np0005589310 elastic_sanderson[244171]: --> All data devices are unavailable
Jan 20 14:26:25 np0005589310 systemd[1]: libpod-cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2.scope: Deactivated successfully.
Jan 20 14:26:25 np0005589310 podman[244191]: 2026-01-20 19:26:25.160867159 +0000 UTC m=+0.024151575 container died cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 14:26:25 np0005589310 systemd[1]: var-lib-containers-storage-overlay-d0586f776c0d42788629a66b3a2cc47e60ecdd7d4627c7e1c084fea11a7a40bf-merged.mount: Deactivated successfully.
Jan 20 14:26:25 np0005589310 podman[244191]: 2026-01-20 19:26:25.195569159 +0000 UTC m=+0.058853545 container remove cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 14:26:25 np0005589310 systemd[1]: libpod-conmon-cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2.scope: Deactivated successfully.
Jan 20 14:26:25 np0005589310 podman[244269]: 2026-01-20 19:26:25.617755225 +0000 UTC m=+0.038234877 container create 2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 14:26:25 np0005589310 systemd[1]: Started libpod-conmon-2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1.scope.
Jan 20 14:26:25 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:26:25 np0005589310 podman[244269]: 2026-01-20 19:26:25.682223104 +0000 UTC m=+0.102702776 container init 2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 20 14:26:25 np0005589310 podman[244269]: 2026-01-20 19:26:25.687380198 +0000 UTC m=+0.107859850 container start 2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 14:26:25 np0005589310 podman[244269]: 2026-01-20 19:26:25.69071063 +0000 UTC m=+0.111190282 container attach 2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 20 14:26:25 np0005589310 condescending_hodgkin[244285]: 167 167
Jan 20 14:26:25 np0005589310 systemd[1]: libpod-2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1.scope: Deactivated successfully.
Jan 20 14:26:25 np0005589310 podman[244269]: 2026-01-20 19:26:25.692104613 +0000 UTC m=+0.112584265 container died 2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 14:26:25 np0005589310 podman[244269]: 2026-01-20 19:26:25.601238805 +0000 UTC m=+0.021718467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:26:25 np0005589310 systemd[1]: var-lib-containers-storage-overlay-424d49f4c66655cb6d10ff89431f2c9fa980ace198d12db723d0630e02f0d00e-merged.mount: Deactivated successfully.
Jan 20 14:26:25 np0005589310 podman[244269]: 2026-01-20 19:26:25.726658079 +0000 UTC m=+0.147137741 container remove 2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 14:26:25 np0005589310 systemd[1]: libpod-conmon-2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1.scope: Deactivated successfully.
Jan 20 14:26:25 np0005589310 podman[244308]: 2026-01-20 19:26:25.870455578 +0000 UTC m=+0.037337714 container create 8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 14:26:25 np0005589310 systemd[1]: Started libpod-conmon-8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5.scope.
Jan 20 14:26:25 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:26:25 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbab0e82d89cbc9cf644d02bfb1b938e76445f4918478cc5c9ce785b86760bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:25 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbab0e82d89cbc9cf644d02bfb1b938e76445f4918478cc5c9ce785b86760bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:25 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbab0e82d89cbc9cf644d02bfb1b938e76445f4918478cc5c9ce785b86760bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:25 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbab0e82d89cbc9cf644d02bfb1b938e76445f4918478cc5c9ce785b86760bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:25 np0005589310 podman[244308]: 2026-01-20 19:26:25.949018559 +0000 UTC m=+0.115900695 container init 8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 20 14:26:25 np0005589310 podman[244308]: 2026-01-20 19:26:25.854335509 +0000 UTC m=+0.021217655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:26:25 np0005589310 podman[244308]: 2026-01-20 19:26:25.96102765 +0000 UTC m=+0.127909786 container start 8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 14:26:25 np0005589310 podman[244308]: 2026-01-20 19:26:25.964642427 +0000 UTC m=+0.131524583 container attach 8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:26:26 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:26 np0005589310 frosty_edison[244324]: {
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:    "0": [
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:        {
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "devices": [
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "/dev/loop3"
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            ],
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "lv_name": "ceph_lv0",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "lv_size": "21470642176",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "name": "ceph_lv0",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "tags": {
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.cluster_name": "ceph",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.crush_device_class": "",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.encrypted": "0",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.objectstore": "bluestore",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.osd_id": "0",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.type": "block",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.vdo": "0",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.with_tpm": "0"
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            },
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "type": "block",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "vg_name": "ceph_vg0"
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:        }
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:    ],
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:    "1": [
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:        {
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "devices": [
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "/dev/loop4"
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            ],
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "lv_name": "ceph_lv1",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "lv_size": "21470642176",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "name": "ceph_lv1",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "tags": {
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.cluster_name": "ceph",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.crush_device_class": "",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.encrypted": "0",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.objectstore": "bluestore",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.osd_id": "1",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.type": "block",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.vdo": "0",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.with_tpm": "0"
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            },
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "type": "block",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "vg_name": "ceph_vg1"
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:        }
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:    ],
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:    "2": [
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:        {
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "devices": [
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "/dev/loop5"
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            ],
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "lv_name": "ceph_lv2",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "lv_size": "21470642176",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "name": "ceph_lv2",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "tags": {
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.cephx_lockbox_secret": "",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.cluster_name": "ceph",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.crush_device_class": "",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.encrypted": "0",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.objectstore": "bluestore",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.osd_id": "2",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.type": "block",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.vdo": "0",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:                "ceph.with_tpm": "0"
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            },
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "type": "block",
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:            "vg_name": "ceph_vg2"
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:        }
Jan 20 14:26:26 np0005589310 frosty_edison[244324]:    ]
Jan 20 14:26:26 np0005589310 frosty_edison[244324]: }
Jan 20 14:26:26 np0005589310 systemd[1]: libpod-8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5.scope: Deactivated successfully.
Jan 20 14:26:26 np0005589310 podman[244308]: 2026-01-20 19:26:26.2557249 +0000 UTC m=+0.422607036 container died 8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 20 14:26:26 np0005589310 systemd[1]: var-lib-containers-storage-overlay-fbbab0e82d89cbc9cf644d02bfb1b938e76445f4918478cc5c9ce785b86760bf-merged.mount: Deactivated successfully.
Jan 20 14:26:26 np0005589310 podman[244308]: 2026-01-20 19:26:26.292574992 +0000 UTC m=+0.459457128 container remove 8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 14:26:26 np0005589310 systemd[1]: libpod-conmon-8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5.scope: Deactivated successfully.
Jan 20 14:26:26 np0005589310 podman[244406]: 2026-01-20 19:26:26.709773126 +0000 UTC m=+0.038876652 container create 589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 20 14:26:26 np0005589310 systemd[1]: Started libpod-conmon-589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00.scope.
Jan 20 14:26:26 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:26:26 np0005589310 podman[244406]: 2026-01-20 19:26:26.776402038 +0000 UTC m=+0.105505574 container init 589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cartwright, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 14:26:26 np0005589310 podman[244406]: 2026-01-20 19:26:26.782390262 +0000 UTC m=+0.111493788 container start 589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cartwright, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 20 14:26:26 np0005589310 podman[244406]: 2026-01-20 19:26:26.785772455 +0000 UTC m=+0.114875981 container attach 589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 14:26:26 np0005589310 vibrant_cartwright[244422]: 167 167
Jan 20 14:26:26 np0005589310 podman[244406]: 2026-01-20 19:26:26.692518018 +0000 UTC m=+0.021621564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:26:26 np0005589310 systemd[1]: libpod-589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00.scope: Deactivated successfully.
Jan 20 14:26:26 np0005589310 podman[244406]: 2026-01-20 19:26:26.78766271 +0000 UTC m=+0.116766236 container died 589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cartwright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 14:26:26 np0005589310 systemd[1]: var-lib-containers-storage-overlay-4884514c1c9217a4593aa466dce71895e4a53665b02c159efe74d4706ac11087-merged.mount: Deactivated successfully.
Jan 20 14:26:26 np0005589310 podman[244406]: 2026-01-20 19:26:26.823851716 +0000 UTC m=+0.152955242 container remove 589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cartwright, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:26:26 np0005589310 systemd[1]: libpod-conmon-589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00.scope: Deactivated successfully.
Jan 20 14:26:27 np0005589310 podman[244446]: 2026-01-20 19:26:27.013646088 +0000 UTC m=+0.053729631 container create 3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 14:26:27 np0005589310 systemd[1]: Started libpod-conmon-3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1.scope.
Jan 20 14:26:27 np0005589310 systemd[1]: Started libcrun container.
Jan 20 14:26:27 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284a3b277e0c7a80fece7270c958e006947fc6c46c8481d45d25730eed25ff1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:27 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284a3b277e0c7a80fece7270c958e006947fc6c46c8481d45d25730eed25ff1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:27 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284a3b277e0c7a80fece7270c958e006947fc6c46c8481d45d25730eed25ff1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:27 np0005589310 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284a3b277e0c7a80fece7270c958e006947fc6c46c8481d45d25730eed25ff1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 14:26:27 np0005589310 podman[244446]: 2026-01-20 19:26:27.077149675 +0000 UTC m=+0.117233238 container init 3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 14:26:27 np0005589310 podman[244446]: 2026-01-20 19:26:26.985276211 +0000 UTC m=+0.025359854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 14:26:27 np0005589310 podman[244446]: 2026-01-20 19:26:27.084872442 +0000 UTC m=+0.124955985 container start 3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 14:26:27 np0005589310 podman[244446]: 2026-01-20 19:26:27.087783322 +0000 UTC m=+0.127866875 container attach 3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 14:26:27 np0005589310 lvm[244541]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:26:27 np0005589310 lvm[244541]: VG ceph_vg0 finished
Jan 20 14:26:27 np0005589310 lvm[244542]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:26:27 np0005589310 lvm[244542]: VG ceph_vg1 finished
Jan 20 14:26:27 np0005589310 lvm[244544]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:26:27 np0005589310 lvm[244544]: VG ceph_vg2 finished
Jan 20 14:26:27 np0005589310 sad_gagarin[244463]: {}
Jan 20 14:26:27 np0005589310 systemd[1]: libpod-3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1.scope: Deactivated successfully.
Jan 20 14:26:27 np0005589310 systemd[1]: libpod-3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1.scope: Consumed 1.277s CPU time.
Jan 20 14:26:27 np0005589310 podman[244446]: 2026-01-20 19:26:27.9060599 +0000 UTC m=+0.946143473 container died 3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 20 14:26:27 np0005589310 systemd[1]: var-lib-containers-storage-overlay-284a3b277e0c7a80fece7270c958e006947fc6c46c8481d45d25730eed25ff1a-merged.mount: Deactivated successfully.
Jan 20 14:26:27 np0005589310 podman[244446]: 2026-01-20 19:26:27.951625043 +0000 UTC m=+0.991708586 container remove 3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 14:26:27 np0005589310 systemd[1]: libpod-conmon-3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1.scope: Deactivated successfully.
Jan 20 14:26:27 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 14:26:28 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:28 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:26:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 14:26:28 np0005589310 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:26:28 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:26:29 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:26:29 np0005589310 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 14:26:30 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:26:31
Jan 20 14:26:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 14:26:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 14:26:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'images', 'backups', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'volumes', '.rgw.root']
Jan 20 14:26:31 np0005589310 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 14:26:32 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:33 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:26:34 np0005589310 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 14:26:36 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:38 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:38 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:26:40 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:42 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:43 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 14:26:44 np0005589310 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 14:26:46 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:48 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:48 np0005589310 podman[244584]: 2026-01-20 19:26:48.441549768 +0000 UTC m=+0.107851940 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:26:48 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:26:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 14:26:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/155780243' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 14:26:49 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 14:26:49 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/155780243' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 14:26:50 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:51 np0005589310 podman[244611]: 2026-01-20 19:26:51.403894253 +0000 UTC m=+0.074336960 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 14:26:52 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:53 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:26:54 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:56 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:58 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:26:58 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:27:00 np0005589310 nova_compute[239038]: 2026-01-20 19:27:00.048 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:27:00 np0005589310 nova_compute[239038]: 2026-01-20 19:27:00.049 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:27:00 np0005589310 nova_compute[239038]: 2026-01-20 19:27:00.049 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:27:00 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:27:00 np0005589310 nova_compute[239038]: 2026-01-20 19:27:00.678 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:27:00 np0005589310 nova_compute[239038]: 2026-01-20 19:27:00.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:27:00 np0005589310 nova_compute[239038]: 2026-01-20 19:27:00.682 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 20 14:27:00 np0005589310 nova_compute[239038]: 2026-01-20 19:27:00.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 20 14:27:00 np0005589310 nova_compute[239038]: 2026-01-20 19:27:00.821 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 20 14:27:01 np0005589310 systemd-logind[797]: New session 52 of user zuul.
Jan 20 14:27:01 np0005589310 systemd[1]: Started Session 52 of User zuul.
Jan 20 14:27:01 np0005589310 nova_compute[239038]: 2026-01-20 19:27:01.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:27:01 np0005589310 nova_compute[239038]: 2026-01-20 19:27:01.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:27:02 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:27:02 np0005589310 nova_compute[239038]: 2026-01-20 19:27:02.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:27:02 np0005589310 nova_compute[239038]: 2026-01-20 19:27:02.720 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:27:02 np0005589310 nova_compute[239038]: 2026-01-20 19:27:02.721 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:27:02 np0005589310 nova_compute[239038]: 2026-01-20 19:27:02.721 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:27:02 np0005589310 nova_compute[239038]: 2026-01-20 19:27:02.722 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 20 14:27:02 np0005589310 nova_compute[239038]: 2026-01-20 19:27:02.722 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:27:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:27:03 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1704669262' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:27:03 np0005589310 nova_compute[239038]: 2026-01-20 19:27:03.265 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:27:03 np0005589310 nova_compute[239038]: 2026-01-20 19:27:03.422 239044 WARNING nova.virt.libvirt.driver [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 20 14:27:03 np0005589310 nova_compute[239038]: 2026-01-20 19:27:03.423 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5146MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 20 14:27:03 np0005589310 nova_compute[239038]: 2026-01-20 19:27:03.423 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:27:03 np0005589310 nova_compute[239038]: 2026-01-20 19:27:03.424 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:27:03 np0005589310 nova_compute[239038]: 2026-01-20 19:27:03.533 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 20 14:27:03 np0005589310 nova_compute[239038]: 2026-01-20 19:27:03.533 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 20 14:27:03 np0005589310 nova_compute[239038]: 2026-01-20 19:27:03.595 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 20 14:27:03 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14392 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:03 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:27:04 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 14:27:04 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3032183059' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 14:27:04 np0005589310 nova_compute[239038]: 2026-01-20 19:27:04.108 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 20 14:27:04 np0005589310 nova_compute[239038]: 2026-01-20 19:27:04.113 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed in ProviderTree for provider: 178956bf-6050-42b7-876f-3f96271cf4ff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 20 14:27:04 np0005589310 nova_compute[239038]: 2026-01-20 19:27:04.139 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed for provider 178956bf-6050-42b7-876f-3f96271cf4ff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 20 14:27:04 np0005589310 nova_compute[239038]: 2026-01-20 19:27:04.141 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 20 14:27:04 np0005589310 nova_compute[239038]: 2026-01-20 19:27:04.142 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:27:04 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:27:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:27:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:27:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:27:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:27:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 14:27:04 np0005589310 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 14:27:04 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14396 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:05 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 20 14:27:05 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/898834180' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 20 14:27:05 np0005589310 nova_compute[239038]: 2026-01-20 19:27:05.142 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 20 14:27:05 np0005589310 nova_compute[239038]: 2026-01-20 19:27:05.142 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 20 14:27:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:27:05.455 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 20 14:27:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:27:05.455 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 20 14:27:05 np0005589310 ovn_metadata_agent[154791]: 2026-01-20 19:27:05.455 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 20 14:27:06 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:27:08 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:27:08 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:27:10 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:27:10 np0005589310 ovs-vsctl[245006]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 20 14:27:11 np0005589310 virtqemud[238596]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 20 14:27:11 np0005589310 virtqemud[238596]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 20 14:27:11 np0005589310 virtqemud[238596]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 20 14:27:11 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: cache status {prefix=cache status} (starting...)
Jan 20 14:27:11 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: client ls {prefix=client ls} (starting...)
Jan 20 14:27:12 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:27:12 np0005589310 lvm[245355]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 14:27:12 np0005589310 lvm[245355]: VG ceph_vg0 finished
Jan 20 14:27:12 np0005589310 lvm[245358]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 14:27:12 np0005589310 lvm[245358]: VG ceph_vg2 finished
Jan 20 14:27:12 np0005589310 lvm[245366]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 14:27:12 np0005589310 lvm[245366]: VG ceph_vg1 finished
Jan 20 14:27:12 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14400 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:12 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: damage ls {prefix=damage ls} (starting...)
Jan 20 14:27:12 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14402 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:12 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: dump loads {prefix=dump loads} (starting...)
Jan 20 14:27:13 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 20 14:27:13 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 20 14:27:13 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14404 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:13 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 20 14:27:13 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 20 14:27:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 20 14:27:13 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3430812925' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 20 14:27:13 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 20 14:27:13 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14408 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:13 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: 2026-01-20T19:27:13.794+0000 7f97a9c36640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 14:27:13 np0005589310 ceph-mgr[75417]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 14:27:13 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 20 14:27:13 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:27:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 14:27:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/961119628' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 14:27:14 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:27:14 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: ops {prefix=ops} (starting...)
Jan 20 14:27:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 20 14:27:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/751632128' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 20 14:27:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 20 14:27:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761661735' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 20 14:27:14 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: session ls {prefix=session ls} (starting...)
Jan 20 14:27:14 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 20 14:27:14 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3770531288' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 20 14:27:15 np0005589310 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: status {prefix=status} (starting...)
Jan 20 14:27:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 20 14:27:15 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/717942052' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 20 14:27:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 20 14:27:15 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/715054790' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 20 14:27:15 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14422 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:15 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14426 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:15 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 20 14:27:15 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1813597264' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 20 14:27:16 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:27:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 20 14:27:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1298277820' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 20 14:27:16 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 20 14:27:16 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1572877461' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 20 14:27:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 20 14:27:17 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3199391415' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 20 14:27:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 20 14:27:17 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1673053243' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 20 14:27:17 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14436 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:17 np0005589310 ceph-mgr[75417]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 14:27:17 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: 2026-01-20T19:27:17.538+0000 7f97a9c36640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 14:27:17 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 20 14:27:17 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2409765975' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 20 14:27:18 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14442 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 20 14:27:18 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4042601415' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 20 14:27:18 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:27:18 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14444 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 53 heartbeat osd_stat(store_statfs(0x4fe13d000/0x0/0x4ffc00000, data 0x42c2c/0x8d000, compress 0x0/0x0/0x0, omap 0x5f5b, meta 0x1a2a0a5), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 53 handle_osd_map epochs [53,54], i have 54, src has [1,54]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.931865 2 0.000066
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989015 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.931968 2 0.000045
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.931937 2 0.000046
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988297 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988035 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003315 2 0.000053
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007934 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003038 2 0.000040
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007306 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001881 2 0.000041
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007109 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932421 2 0.000026
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988290 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932518 2 0.000045
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987898 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932030 2 0.000023
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982515 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933010 2 0.000034
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988595 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932195 2 0.000029
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.931463 2 0.000035
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979825 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932392 2 0.000033
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983629 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983868 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933110 2 0.000036
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986979 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.931819 2 0.001242
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985734 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932674 2 0.000383
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985556 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933097 2 0.000035
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933062 2 0.000034
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985067 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932051 2 0.000079
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982306 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003630 2 0.000758
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008051 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932650 2 0.000046
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985482 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983006 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933739 2 0.000039
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003716 2 0.000045
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986836 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007929 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933760 2 0.000046
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933163 2 0.000068
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986617 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983442 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933440 2 0.000033
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985215 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933205 2 0.000135
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983432 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933169 2 0.000089
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984367 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003645 2 0.000038
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007855 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933199 2 0.000027
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983141 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933133 2 0.000046
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.981694 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933093 2 0.000034
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.981930 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932293 2 0.001040
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982586 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932640 2 0.000025
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.978172 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932985 2 0.000043
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980701 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933206 2 0.000033
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.981211 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932699 2 0.000035
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.981763 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932948 2 0.000034
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980305 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932939 2 0.000030
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980591 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933183 2 0.000036
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980110 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933304 2 0.000030
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980354 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004722 2 0.000039
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933127 2 0.000045
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008269 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979284 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933288 2 0.000029
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979539 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933337 2 0.000030
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979890 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933576 2 0.000043
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980242 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005009 2 0.000026
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007318 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.934966 2 0.000050
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.987205 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.935496 2 0.000028
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.935445 2 0.000075
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990042 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005302 2 0.000030
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007299 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990448 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.935686 2 0.000043
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991053 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006252 3 0.000166
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006355 3 0.000311
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006243 3 0.000086
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000074 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] exit Started/Stray 0.998431 7 0.000118
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 54 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007501 3 0.001453
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] exit Started/Stray 0.999121 7 0.000263
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] exit Started/Stray 1.000466 7 0.000159
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] exit Started/Stray 0.999243 7 0.000062
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009263 3 0.000181
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009489 4 0.000163
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009110 3 0.000155
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009373 3 0.000278
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008629 3 0.000127
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008757 3 0.000058
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008794 3 0.000161
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008670 3 0.000218
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000026 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009234 3 0.000392
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009075 3 0.000092
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009082 3 0.000067
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009110 3 0.000077
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008781 3 0.000180
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008840 3 0.000144
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008854 3 0.000091
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008843 3 0.000204
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008860 3 0.000052
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008902 3 0.000062
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008984 3 0.000516
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.001402 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.001528 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009831 3 0.000059
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009832 3 0.000064
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009739 3 0.000061
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009733 3 0.000057
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009822 3 0.000053
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009836 4 0.000039
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009814 3 0.000057
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009799 3 0.000058
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009733 3 0.000047
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009757 3 0.000092
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009829 3 0.000038
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009703 3 0.000043
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009748 3 0.000082
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009511 3 0.000056
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009505 3 0.000149
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009469 3 0.000062
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009474 3 0.000047
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009466 3 0.000130
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009414 3 0.000076
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] exit Started/Stray 1.016080 7 0.000080
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] exit Started/Stray 1.003360 7 0.000129
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010099 3 0.000059
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009596 3 0.000070
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009620 3 0.000321
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009571 3 0.000065
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009251 4 0.000070
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009247 3 0.000298
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009288 3 0.000322
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009135 3 0.000278
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010437 3 0.001291
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.010002 4 0.000629
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 54 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000682 1 0.000067
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027700 7 0.000044
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.037426 7 0.000054
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033234 7 0.000081
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036610 7 0.000077
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.039181 7 0.000048
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031779 7 0.000062
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029392 7 0.000161
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029529 7 0.000256
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032488 7 0.000084
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032003 7 0.000091
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036221 7 0.000055
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032638 7 0.000304
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035516 7 0.000178
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030568 7 0.000122
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034830 7 0.000078
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032503 7 0.000277
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031496 7 0.000081
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034311 7 0.000173
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034501 7 0.000111
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036260 7 0.000144
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034468 7 0.005704
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.040245 7 0.000066
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030372 7 0.000326
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.037972 7 0.000047
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034885 7 0.000551
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030449 7 0.000055
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036949 7 0.000094
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.043846 7 0.000082
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.041646 7 0.000061
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.043313 7 0.000062
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.042737 7 0.000059
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.042576 7 0.000061
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.040934 7 0.000107
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.040896 7 0.000051
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.039113 7 0.000062
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.040166 7 0.000111
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.042453 7 0.000114
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033940 7 0.000061
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036484 7 0.000138
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.039106 7 0.000102
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036641 7 0.000275
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.037257 7 0.000097
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036019 7 0.000057
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.038111 7 0.000097
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.037206 7 0.000056
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.038906 7 0.005878
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.038419 7 0.000299
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.038243 7 0.000144
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036511 7 0.000709
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036137 7 0.000068
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.044796 7 0.000240
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035189 7 0.000086
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035023 7 0.000091
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.040408 7 0.000062
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035849 7 0.000069
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036095 7 0.000070
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.042259 7 0.003978
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035587 7 0.000261
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034287 7 0.000521
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.074794 2 0.000082
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.058803 1 0.000090
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050990 1 0.000085
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049658 1 0.000061
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049533 1 0.000036
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049550 1 0.000053
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049591 1 0.000059
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049635 1 0.000026
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049752 1 0.000141
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049735 1 0.000036
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049801 1 0.000175
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049782 1 0.000027
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049825 1 0.000028
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049860 1 0.000025
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049873 1 0.000023
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049909 1 0.000057
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049958 1 0.000022
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049957 1 0.000037
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049983 1 0.000020
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050032 1 0.000251
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050160 1 0.000124
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050086 1 0.000134
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050180 1 0.000038
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050222 1 0.000046
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050263 1 0.000020
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050304 1 0.000112
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050339 1 0.000029
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050369 1 0.000066
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046484 1 0.000031
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046520 1 0.000028
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046533 1 0.000090
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046602 1 0.000017
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046578 1 0.000025
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046627 1 0.000025
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046689 1 0.000023
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046718 1 0.000029
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046800 1 0.000018
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046794 1 0.000024
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046863 1 0.000057
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046862 1 0.000023
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046931 1 0.000077
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047076 1 0.000362
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047017 1 0.000132
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047047 1 0.000172
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047106 1 0.000163
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047108 1 0.000281
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046968 1 0.000022
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047001 1 0.000030
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047050 1 0.000043
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047111 1 0.000059
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047134 1 0.000061
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047193 1 0.000021
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047228 1 0.000040
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047260 1 0.000035
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047314 1 0.000037
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047384 1 0.000208
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047441 1 0.000022
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047490 1 0.000025
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047532 1 0.000132
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.045517 1 0.000066
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.007707 1 0.000143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.066638 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.094398 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014757 1 0.000052
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.065808 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.103295 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.104083 2 0.000076
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive 0.104119 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000282 1 0.000122
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022130 1 0.000045
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.071839 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.105158 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030044 1 0.000039
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.079626 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.118845 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036896 1 0.000063
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.086514 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.118330 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044245 1 0.000041
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.093880 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.123412 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051605 1 0.000056
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.101287 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.130848 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 486703 data_alloc: 218103808 data_used: 1361
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058882 1 0.000058
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.108696 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.145346 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.007978 1 0.000036
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.057826 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.090019 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.008118 1 0.000065
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.057932 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.090462 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.007936 1 0.000046
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.057851 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.088502 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.007977 1 0.000038
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.057910 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.093502 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.008182 1 0.000046
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.058004 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.090721 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.012571 1 0.000034
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.062493 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.095049 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.012589 1 0.000046
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.012532 1 0.000052
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.062534 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.012493 1 0.000036
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.012567 1 0.000036
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.098858 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.062604 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.062503 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.097054 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.062595 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.097511 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.094140 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.017658 1 0.000046
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.067950 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.104252 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.017559 1 0.000052
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.067869 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.108153 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.017735 1 0.000042
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.067889 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.102585 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.017801 1 0.000053
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.068117 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.102678 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.017995 1 0.000047
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.068257 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.098720 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.019658 1 0.000048
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.069982 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.019575 1 0.000050
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.019528 1 0.000036
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.069933 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.019649 1 0.000047
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.069967 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.106936 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.069985 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.105002 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.100460 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.108025 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.019599 1 0.000044
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.066134 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.110068 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.023359 1 0.000052
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.069962 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.023334 1 0.000040
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.113375 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.069975 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.112745 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.023476 1 0.000036
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.070042 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.111739 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.023410 1 0.000048
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.070076 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.111114 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.023619 1 0.000064
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.070427 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.111377 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.026443 1 0.000048
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.073169 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.112327 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.026407 1 0.000062
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.073176 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.113391 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.026317 1 0.000059
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.073157 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.109687 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.026444 1 0.000046
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.073290 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.115782 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.026342 1 0.000038
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.073303 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.107299 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.028187 1 0.000066
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.075104 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.111991 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.028088 1 0.000055
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.075305 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.117970 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.028188 1 0.000044
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.075182 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.114357 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.028082 1 0.000051
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.075191 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.028178 1 0.000042
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.075227 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.112554 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.111307 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.029594 1 0.000034
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.076729 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.114884 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.029573 1 0.000058
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.029452 1 0.000063
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.076740 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.076500 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.114112 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.114805 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.029786 1 0.000048
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.076817 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.115606 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.039332 1 0.000056
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.086502 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.122673 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.039310 1 0.000032
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.086489 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.131332 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.039539 1 0.000055
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.086653 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.123236 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.039423 1 0.000035
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.086678 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.121727 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.039503 1 0.000033
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.086724 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.121953 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.041908 1 0.000048
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.089440 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.131748 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.042281 1 0.000056
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.089645 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.125545 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.042414 1 0.000045
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.089713 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.130160 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.042814 1 0.000043
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.090415 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.129760 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.043008 1 0.000044
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.090481 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.126609 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[5.1( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.096850 4 0.000054
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[5.1( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.144450 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[5.1( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.180285 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[5.18( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.096846 4 0.000050
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[5.18( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.142464 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[5.18( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.176808 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.e( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/Deleting 1.085582 5 0.000162
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.e( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete 1.085939 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.e( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started 2.188582 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.316732 5 0.000059
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive 1.316771 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.315560 5 0.000076
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive 1.315611 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000134 1 0.000078
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000201 1 0.000122
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/Deleting 0.046879 2 0.000326
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete 0.047130 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started 2.363088 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/Deleting 0.048654 2 0.000228
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete 0.048928 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started 2.363859 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 55 heartbeat osd_stat(store_statfs(0x4fe137000/0x0/0x4ffc00000, data 0x44dcf/0x91000, compress 0x0/0x0/0x0, omap 0x61e6, meta 0x1a29e1a), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.511136 5 0.000092
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive 1.511211 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000143 1 0.000142
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/Deleting 0.029850 2 0.000375
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete 0.030158 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started 2.541925 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.675452 5 0.000051
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive 1.675520 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000165 1 0.000131
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.680923 5 0.000045
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive 1.680961 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000105 1 0.000089
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/Deleting 0.010742 2 0.000248
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete 0.011005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started 2.689955 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/Deleting 0.019805 2 0.000248
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete 0.019981 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started 2.717065 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 1089536 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.192572594s of 10.370075226s, submitted: 647
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 1073152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 57 handle_osd_map epochs [57,58], i have 57, src has [1,58]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 999424 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 405066 data_alloc: 218103808 data_used: 1361
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 58 heartbeat osd_stat(store_statfs(0x4fe12d000/0x0/0x4ffc00000, data 0x4be12/0x9d000, compress 0x0/0x0/0x0, omap 0x6c12, meta 0x1a293ee), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 58 handle_osd_map epochs [58,59], i have 58, src has [1,59]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 411400 data_alloc: 218103808 data_used: 1873
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 59 heartbeat osd_stat(store_statfs(0x4fe12a000/0x0/0x4ffc00000, data 0x4db4c/0xa0000, compress 0x0/0x0/0x0, omap 0x6e9d, meta 0x1a29163), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fe122000/0x0/0x4ffc00000, data 0x512dd/0xa6000, compress 0x0/0x0/0x0, omap 0x73b3, meta 0x1a28c4d), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.423548698s of 10.460992813s, submitted: 13
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fe126000/0x0/0x4ffc00000, data 0x512dd/0xa6000, compress 0x0/0x0/0x0, omap 0x73b3, meta 0x1a28c4d), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 423595 data_alloc: 218103808 data_used: 2872
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fe126000/0x0/0x4ffc00000, data 0x512dd/0xa6000, compress 0x0/0x0/0x0, omap 0x73b3, meta 0x1a28c4d), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fe126000/0x0/0x4ffc00000, data 0x512dd/0xa6000, compress 0x0/0x0/0x0, omap 0x73b3, meta 0x1a28c4d), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1089536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 62 handle_osd_map epochs [63,64], i have 62, src has [1,64]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1032192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 440770 data_alloc: 218103808 data_used: 2872
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 64 handle_osd_map epochs [63,64], i have 64, src has [1,64]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000124 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000026 1 0.000049
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000072 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000188 1 0.000186
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000056 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000319 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000125 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000039
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000151 1 0.000056
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000033 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000200 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000018
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000034
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000035 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000112 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000137 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000039
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000334 1 0.000097
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000046 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000413 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 64 handle_osd_map epochs [64,65], i have 65, src has [1,65]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.327224 2 0.000058
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.327495 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.327547 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.328313 2 0.000157
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.328682 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.328796 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000089 1 0.000110
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000007 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000262 1 0.000363
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000059 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.328006 2 0.000064
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.328257 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.328321 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.327025 2 0.000219
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.327491 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.327541 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000134 1 0.000464
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000169 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000246 1 0.000338
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000023 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 65 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 66 handle_osd_map epochs [62,66], i have 66, src has [1,66]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=0 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000091 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=0 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000016
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000159 1 0.000047
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000050 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000278 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=0 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000128 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=0 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000021
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000134 1 0.000053
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000041 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000202 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=0 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000195 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=0 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000033
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000129 1 0.000061
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000039 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000197 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=0 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000084 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=0 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000022 1 0.000044
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000108 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000113 1 0.000245
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000091 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000254 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.055888 6 0.000055
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.054530 6 0.000286
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.055822 6 0.000186
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.054784 6 0.000131
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 lc 39'299 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003650 3 0.000476
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 lc 39'299 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 lc 39'299 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000128 1 0.000069
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 lc 39'299 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.043858 1 0.000066
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 lc 39'182 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.048241 3 0.000204
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 lc 39'182 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 lc 39'182 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000289 1 0.000037
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 lc 39'182 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.032815 1 0.000032
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 lc 39'48 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.081460 3 0.000121
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 lc 39'48 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 lc 39'48 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000049 1 0.000044
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 lc 39'48 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059666 1 0.000065
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 lc 39'90 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.141263 3 0.000278
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 lc 39'90 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 lc 39'90 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000057 1 0.000034
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 lc 39'90 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.045710 1 0.000018
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 802816 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 66 heartbeat osd_stat(store_statfs(0x4fe114000/0x0/0x4ffc00000, data 0x582b7/0xb2000, compress 0x0/0x0/0x0, omap 0x7b54, meta 0x1a284ac), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 66 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.858271 1 0.000027
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 0.999562 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.055501 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.918346 1 0.000028
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 0.999785 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.055723 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001094 2 0.000082
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.001360 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000141 1 0.000192
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.001397 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000087 1 0.000121
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000016 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000150 1 0.000224
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000010 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.002480 2 0.000162
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.813610 1 0.000026
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.002808 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 1.000815 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001624 2 0.000081
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.002915 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.001856 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.001889 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.055570 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000047 1 0.000171
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000211 1 0.000250
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000009 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001430 2 0.000134
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.001721 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.001882 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.953263 1 0.000028
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 1.000990 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.055872 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000051 1 0.000071
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000079 1 0.000110
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000083 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000804 1 0.000838
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002031 2 0.000071
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002145 2 0.000078
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000326 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001842 2 0.000039
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001271 2 0.000082
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=20
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000024 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=20
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001269 2 0.000045
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000020 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=12
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000524 2 0.000080
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001890 2 0.000135
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=13
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=13
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000978 2 0.000070
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 1761280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.942283630s of 10.073143959s, submitted: 77
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996403 2 0.000143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999954 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996606 2 0.000148
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000017 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997226 2 0.000138
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999696 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996347 2 0.000134
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999313 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004035 3 0.000144
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.005327 6 0.000065
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.005030 6 0.000054
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.006557 6 0.000065
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.004453 6 0.000980
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006257 3 0.000076
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006440 3 0.000331
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000032 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007486 3 0.000198
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 39'483 lc 39'43 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.006571 3 0.000164
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 39'483 lc 39'43 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 39'483 lc 39'43 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000047 1 0.000073
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 39'483 lc 39'43 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 68'484 (0'0,68'484] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059291 1 0.000048
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 39'483 lc 39'136 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.065893 3 0.000233
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 39'483 lc 39'136 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 68'484 (0'0,68'484] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 39'483 lc 39'136 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000124 1 0.000064
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 39'483 lc 39'136 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 68'484 (0'0,68'484] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.031668 1 0.000159
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 68'484 (0'0,68'484] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.098044 3 0.000247
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000101 1 0.000048
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 68 ms_handle_reset con 0x5564ef2f6400 session 0x5564eea92a80
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 68 ms_handle_reset con 0x5564ef2f6800 session 0x5564eea93500
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 68 ms_handle_reset con 0x5564ef2f6000 session 0x5564eea92540
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.040397 1 0.000131
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 68'485 lc 39'49 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.138408 3 0.000119
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 68'485 lc 39'49 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 68'485 lc 39'49 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000118 1 0.000149
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 68'485 lc 39'49 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.053319 1 0.000043
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 1720320 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 68 ms_handle_reset con 0x5564ef2f7000 session 0x5564ee4fae00
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 68 ms_handle_reset con 0x5564ee511c00 session 0x5564ed84ba40
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 68 ms_handle_reset con 0x5564ef2f6c00 session 0x5564ee52c700
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10a000/0x0/0x4ffc00000, data 0x5bf0e/0xbe000, compress 0x0/0x0/0x0, omap 0x806a, meta 0x1a27f96), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'486 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.861020 1 0.000050
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'486 active+remapped mbc={}] exit Started/ReplicaActive 1.053045 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'486 active+remapped mbc={}] exit Started 2.058343 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'486 active+remapped mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.914812 1 0.000039
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 1.053562 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 pct=0'0 crt=68'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] exit Reset 0.000083 1 0.000137
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.987614 1 0.000093
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] exit Started/ReplicaActive 1.053672 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] exit Started 2.059043 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.058815 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 pct=0'0 crt=68'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 1 0.000075
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] exit Reset 0.000121 1 0.000157
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000158 1 0.000384
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.955952 1 0.000121
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] exit Started/ReplicaActive 1.053860 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] exit Started 2.060458 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 pct=0'0 crt=68'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] exit Reset 0.000073 1 0.000121
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000206 1 0.000233
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 1 0.000076
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000112 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000791
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=20
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=23
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=20
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001811 3 0.000066
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=23
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002150 3 0.000121
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001869 3 0.000050
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001265 3 0.000150
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 1564672 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548195 data_alloc: 218103808 data_used: 4542
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 69 heartbeat osd_stat(store_statfs(0x4fe0fc000/0x0/0x4ffc00000, data 0x5fa86/0xce000, compress 0x0/0x0/0x0, omap 0x806a, meta 0x1a27f96), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009922 2 0.000045
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011906 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'485 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010350 2 0.000380
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012506 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=69/70 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'485 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010755 2 0.000109
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013051 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=69/70 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'487 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011077 2 0.000106
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012559 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/56 les/c/f=70/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002549 3 0.000291
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/56 les/c/f=70/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/56 les/c/f=70/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/56 les/c/f=70/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=69/70 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=69/70 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003291 3 0.000190
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003898 3 0.000198
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004415 3 0.000479
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 70 handle_osd_map epochs [70,70], i have 70, src has [1,70]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 1556480 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 1556480 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 1490944 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x614d5/0xd1000, compress 0x0/0x0/0x0, omap 0x806a, meta 0x1a27f96), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 1482752 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=0 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000108 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=0 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000043
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000099 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000136 1 0.000221
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001134 2 0.000086
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 1466368 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 562242 data_alloc: 218103808 data_used: 4542
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=0 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000108 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=0 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000026
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000177 1 0.000050
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000042 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000254 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=0 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000062 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=0 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000013
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000100 1 0.000039
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000031 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000169 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 71 handle_osd_map epochs [71,71], i have 71, src has [1,71]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 71 handle_osd_map epochs [72,72], i have 72, src has [1,72]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.961079 2 0.000126
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.962510 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.509527 2 0.000095
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.509838 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.509868 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000271 1 0.000372
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000052 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.510407 2 0.000082
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.510689 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.510742 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000535 1 0.000700
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000176 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=45/22 lis/c=71/45 les/c/f=72/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004462 4 0.000357
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=45/22 lis/c=71/45 les/c/f=72/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=45/22 lis/c=71/45 les/c/f=72/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=45/22 lis/c=71/45 les/c/f=72/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 72 handle_osd_map epochs [72,72], i have 72, src has [1,72]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 1425408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 72 handle_osd_map epochs [72,73], i have 73, src has [1,73]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=68'487 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.001896 6 0.000530
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=68'487 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=68'487 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.004182 6 0.000161
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 lc 39'36 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003029 3 0.000193
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 lc 39'36 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 lc 39'36 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000066 1 0.000085
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 lc 39'36 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.046009 1 0.000064
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 lc 39'53 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.048794 3 0.000413
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 lc 39'53 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 lc 39'53 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000073 1 0.000102
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 lc 39'53 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052550 1 0.000043
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 1572864 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.913020 1 0.000030
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 1.014550 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.018864 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.965877 1 0.000081
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive 1.015155 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000073 1 0.000107
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started 2.017325 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 active+remapped mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000042
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 pct=0'0 crt=68'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] exit Reset 0.000110 1 0.000159
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] exit Start 0.000017 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000065
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=19
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=19
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001194 3 0.000132
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001445 3 0.000077
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 1523712 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.867558479s of 10.107018471s, submitted: 142
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 74 handle_osd_map epochs [74,75], i have 75, src has [1,75]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993522 2 0.000178
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995136 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=74/75 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994486 2 0.000106
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995916 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=74/75 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=74/75 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=74/75 n=7 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002418 3 0.000419
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=74/75 n=7 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=74/75 n=7 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=74/75 n=7 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=74/75 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=74/75 n=6 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002876 3 0.000203
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=74/75 n=6 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=74/75 n=6 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=74/75 n=6 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 75 heartbeat osd_stat(store_statfs(0x4fe0e5000/0x0/0x4ffc00000, data 0x686c7/0xe1000, compress 0x0/0x0/0x0, omap 0x9237, meta 0x1a26dc9), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 1482752 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 1482752 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 606362 data_alloc: 218103808 data_used: 4542
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 1474560 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 1474560 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 75 heartbeat osd_stat(store_statfs(0x4fe0e8000/0x0/0x4ffc00000, data 0x6a116/0xe4000, compress 0x0/0x0/0x0, omap 0x94c2, meta 0x1a26b3e), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 1425408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 1425408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 1376256 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 621877 data_alloc: 218103808 data_used: 4542
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 77 heartbeat osd_stat(store_statfs(0x4fe0de000/0x0/0x4ffc00000, data 0x6d89f/0xea000, compress 0x0/0x0/0x0, omap 0x99d8, meta 0x1a26628), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 1368064 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 1368064 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1318912 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1318912 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 1310720 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 627885 data_alloc: 218103808 data_used: 4542
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.020220757s of 12.071414948s, submitted: 21
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 1310720 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fe0d6000/0x0/0x4ffc00000, data 0x71028/0xf0000, compress 0x0/0x0/0x0, omap 0x9eee, meta 0x1a26112), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fe0dc000/0x0/0x4ffc00000, data 0x71028/0xf0000, compress 0x0/0x0/0x0, omap 0x9eee, meta 0x1a26112), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=0 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000126 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=0 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000037
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000132 1 0.000064
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000030 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000183 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=0 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000408 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=0 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000052 1 0.000075
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000134 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000126 1 0.000253
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000255 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000449 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 1294336 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.815960 2 0.000063
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.816189 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.816223 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000099 1 0.000150
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000006 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.816342 2 0.000344
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.816847 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.817023 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000056 1 0.000084
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 81 handle_osd_map epochs [81,81], i have 81, src has [1,81]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 1269760 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 81 heartbeat osd_stat(store_statfs(0x4fe0d7000/0x0/0x4ffc00000, data 0x72d62/0xf3000, compress 0x0/0x0/0x0, omap 0xa179, meta 0x1a25e87), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 1253376 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.693610 5 0.000047
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=68'487 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.690950 5 0.000057
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=68'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=68'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 lc 39'69 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003531 4 0.000130
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 lc 39'69 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 lc 39'69 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000064 1 0.000086
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 lc 39'69 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035801 1 0.000058
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 lc 39'125 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.039411 4 0.000285
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 lc 39'125 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 lc 39'125 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000059 1 0.000071
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 lc 39'125 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.066930 1 0.000033
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 82 heartbeat osd_stat(store_statfs(0x4fe0d2000/0x0/0x4ffc00000, data 0x74815/0xf6000, compress 0x0/0x0/0x0, omap 0xa404, meta 0x1a25bfc), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.207877 1 0.000067
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive 0.314435 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started 2.005429 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.274948 1 0.000129
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 active+remapped mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 0.314554 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.008204 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 pct=0'0 crt=68'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000126 1 0.000216
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000057
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] exit Reset 0.000561 1 0.000611
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] exit Start 0.000199 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000114 1 0.000505
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=10
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000741 3 0.000079
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001722 3 0.000055
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000017 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 1376256 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 668710 data_alloc: 218103808 data_used: 4542
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 83 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012510 2 0.000218
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.014391 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=83/84 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.014021 2 0.000083
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.014977 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=83/84 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=83/84 n=7 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002833 4 0.000273
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=83/84 n=7 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=83/84 n=7 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=83/84 n=7 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002298 4 0.000148
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 1368064 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 1327104 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 84 handle_osd_map epochs [84,85], i have 85, src has [1,85]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 1327104 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1318912 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 1294336 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 686729 data_alloc: 218103808 data_used: 4794
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 87 heartbeat osd_stat(store_statfs(0x4fe0bd000/0x0/0x4ffc00000, data 0x7edc3/0x10b000, compress 0x0/0x0/0x0, omap 0xb346, meta 0x1a24cba), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 1294336 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.231353760s of 11.357475281s, submitted: 56
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 1286144 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fe0bd000/0x0/0x4ffc00000, data 0x7edc3/0x10b000, compress 0x0/0x0/0x0, omap 0xb346, meta 0x1a24cba), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=0 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000155 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=0 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000042 1 0.000079
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000129 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000277 1 0.000298
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.001117 2 0.000086
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 88 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.428262 2 0.000141
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 0.429802 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.002405 4 0.000347
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000191 1 0.000111
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 1155072 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.126729 2 0.000066
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 1138688 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1130496 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 708622 data_alloc: 218103808 data_used: 4794
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1130496 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 1122304 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 91 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0x85eea/0x119000, compress 0x0/0x0/0x0, omap 0xbec2, meta 0x1a2413e), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 1122304 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 1105920 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 91 handle_osd_map epochs [92,93], i have 91, src has [1,93]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=0 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000128 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=0 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000037
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000211 1 0.000056
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000051 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000291 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0x85eea/0x119000, compress 0x0/0x0/0x0, omap 0xbec2, meta 0x1a2413e), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 1048576 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720441 data_alloc: 218103808 data_used: 4794
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 93 handle_osd_map epochs [93,94], i have 94, src has [1,94]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.783453 2 0.000092
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.783782 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.783808 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000107 1 0.000148
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000007 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 94 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 1048576 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 lc 0'0 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=68'485 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.008943 6 0.000053
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 lc 0'0 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=68'485 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 lc 0'0 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=68'485 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 lc 39'131 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.005830 3 0.000257
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 lc 39'131 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 lc 39'131 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000217 1 0.000094
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 lc 39'131 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.057229 1 0.000083
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 876544 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.224848747s of 10.569451332s, submitted: 87
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.943626 1 0.000081
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 active+remapped mbc={}] exit Started/ReplicaActive 1.007072 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 active+remapped mbc={}] exit Started 2.016063 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 active+remapped mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 pct=0'0 crt=68'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] exit Reset 0.000104 1 0.000145
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000040
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0 olog.dups.size()=16
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=16
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001291 3 0.000047
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 851968 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 851968 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fcefe000/0x0/0x4ffc00000, data 0x8e7ca/0x12a000, compress 0x0/0x0/0x0, omap 0xc8ee, meta 0x2bc3712), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 96 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.796672 2 0.000068
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.798081 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=96/97 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=96/97 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=96/97 n=6 ec=49/33 lis/c=96/56 les/c/f=97/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001914 4 0.000159
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=96/97 n=6 ec=49/33 lis/c=96/56 les/c/f=97/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=96/97 n=6 ec=49/33 lis/c=96/56 les/c/f=97/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=96/97 n=6 ec=49/33 lis/c=96/56 les/c/f=97/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 802816 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 747833 data_alloc: 218103808 data_used: 4794
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68411392 unmapped: 786432 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=39'483 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 48.139029 99 0.000466
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active 48.146604 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary 49.146655 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=39'483 mlcod 0'0 active mbc={}] exit Started 49.146724 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=39'483 mlcod 0'0 active mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861665726s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 active pruub 157.961791992s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861179352s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY pruub 157.961791992s@ mbc={}] exit Reset 0.000547 1 0.000698
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861179352s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY pruub 157.961791992s@ mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861179352s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY pruub 157.961791992s@ mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861179352s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY pruub 157.961791992s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861179352s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY pruub 157.961791992s@ mbc={}] exit Start 0.000116 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861179352s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY pruub 157.961791992s@ mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 1810432 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/Stray 1.021002 3 0.000239
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started 1.021199 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Reset 0.000064 1 0.000097
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000035
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fcef1000/0x0/0x4ffc00000, data 0x96e89/0x139000, compress 0x0/0x0/0x0, omap 0xd5a5, meta 0x2bc2a5b), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 1736704 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fcef1000/0x0/0x4ffc00000, data 0x96e89/0x139000, compress 0x0/0x0/0x0, omap 0xd5a5, meta 0x2bc2a5b), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 101 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007044 4 0.000061
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.007150 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.004814 5 0.000275
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000090 1 0.000048
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000670 1 0.000079
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.057140 2 0.000055
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 1712128 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 102 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.947837 1 0.000220
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active 1.010849 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary 2.018031 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started 2.018065 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993875504s) [0] async=[0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 active pruub 160.133941650s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993478775s) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY pruub 160.133941650s@ mbc={}] exit Reset 0.000456 1 0.000567
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993478775s) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY pruub 160.133941650s@ mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993478775s) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY pruub 160.133941650s@ mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993478775s) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY pruub 160.133941650s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993478775s) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY pruub 160.133941650s@ mbc={}] exit Start 0.000093 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993478775s) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY pruub 160.133941650s@ mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fcee7000/0x0/0x4ffc00000, data 0x9a35d/0x13f000, compress 0x0/0x0/0x0, omap 0xdabb, meta 0x2bc2545), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1703936 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765529 data_alloc: 218103808 data_used: 4904
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fcee7000/0x0/0x4ffc00000, data 0x9a35d/0x13f000, compress 0x0/0x0/0x0, omap 0xdabb, meta 0x2bc2545), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/Stray 1.056840 6 0.000263
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000263 2 0.000065
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] lb MIN local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 DELETING pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.032254 2 0.000243
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] lb MIN local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete 0.032606 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] lb MIN local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started 1.089623 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1613824 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1613824 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.063548088s of 10.183242798s, submitted: 32
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1613824 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1613824 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1605632 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 762039 data_alloc: 218103808 data_used: 5054
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fcee9000/0x0/0x4ffc00000, data 0x9bd3e/0x141000, compress 0x0/0x0/0x0, omap 0xdd46, meta 0x2bc22ba), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 104 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 548864 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 540672 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fcee3000/0x0/0x4ffc00000, data 0x9f476/0x147000, compress 0x0/0x0/0x0, omap 0xe25c, meta 0x2bc1da4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19(unlocked)] enter Initial
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=0 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000113 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=0 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000024
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000171 1 0.000058
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000035 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000225 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 516096 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.923305 2 0.000066
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.923582 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.923734 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000184 1 0.000368
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000029 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fcede000/0x0/0x4ffc00000, data 0xa1012/0x14a000, compress 0x0/0x0/0x0, omap 0xe4e7, meta 0x2bc1b19), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 507904 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 573440 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 775601 data_alloc: 218103808 data_used: 5639
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=68'487 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.858556 5 0.000129
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=68'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=68'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 lc 39'58 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.004983 4 0.000192
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 lc 39'58 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 lc 39'58 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000109 1 0.000045
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 lc 39'58 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.074592 1 0.000051
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 109 handle_osd_map epochs [109,110], i have 110, src has [1,110]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.185366 1 0.000060
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive 0.265193 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started 2.123832 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 active+remapped mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 pct=0'0 crt=68'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] exit Reset 0.000233 1 0.000290
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] exit Start 0.000017 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001443 2 0.000094
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000582 2 0.000125
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 401408 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 110 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.013576 2 0.000089
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.015694 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=110/111 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=110/111 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=110/111 n=6 ec=49/33 lis/c=110/57 les/c/f=111/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002124 3 0.000192
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=110/111 n=6 ec=49/33 lis/c=110/57 les/c/f=111/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=110/111 n=6 ec=49/33 lis/c=110/57 les/c/f=111/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=110/111 n=6 ec=49/33 lis/c=110/57 les/c/f=111/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 376832 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fced2000/0x0/0x4ffc00000, data 0xa6254/0x156000, compress 0x0/0x0/0x0, omap 0xec88, meta 0x2bc1378), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 368640 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.734509468s of 10.946245193s, submitted: 42
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 540672 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fced3000/0x0/0x4ffc00000, data 0xa7df8/0x159000, compress 0x0/0x0/0x0, omap 0xef13, meta 0x2bc10ed), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=83) [2] r=0 lpr=83 crt=68'487 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 38.961532 84 0.000369
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=83) [2] r=0 lpr=83 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active 38.963950 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=83) [2] r=0 lpr=83 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary 39.978968 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=83) [2] r=0 lpr=83 crt=68'487 mlcod 0'0 active mbc={}] exit Started 39.979246 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=83) [2] r=0 lpr=83 crt=68'487 mlcod 0'0 active mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038787842s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 active pruub 169.396194458s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038736343s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY pruub 169.396194458s@ mbc={}] exit Reset 0.000096 1 0.000188
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038736343s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY pruub 169.396194458s@ mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038736343s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY pruub 169.396194458s@ mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038736343s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY pruub 169.396194458s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038736343s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY pruub 169.396194458s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038736343s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY pruub 169.396194458s@ mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 112 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 524288 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802415 data_alloc: 218103808 data_used: 5639
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY mbc={}] exit Started/Stray 0.694344 3 0.000052
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY mbc={}] exit Started 0.694413 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] exit Reset 0.000303 1 0.000373
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] exit Start 0.000117 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000287
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000041 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 516096 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab415/0x15f000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002405 4 0.000138
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002610 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.005545 5 0.000338
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000090 1 0.000073
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000385 1 0.000031
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 68'487 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.063677 2 0.000046
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 68'487 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 507904 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 68'487 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.956725 1 0.000110
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 68'487 active+remapped mbc={255={}}] exit Started/Primary/Active 1.026769 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 68'487 active+remapped mbc={255={}}] exit Started/Primary 2.029428 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 68'487 active+remapped mbc={255={}}] exit Started 2.029658 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 68'487 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978566170s) [0] async=[0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 active pruub 178.060546875s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978322029s) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY pruub 178.060546875s@ mbc={}] exit Reset 0.000318 1 0.000459
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978322029s) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY pruub 178.060546875s@ mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978322029s) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY pruub 178.060546875s@ mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978322029s) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY pruub 178.060546875s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978322029s) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY pruub 178.060546875s@ mbc={}] exit Start 0.000053 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978322029s) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY pruub 178.060546875s@ mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 507904 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] exit Started/Stray 1.171535 6 0.000265
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=68'484 lcod 68'484 mlcod 68'484 active+clean] exit Started/Primary/Active/Clean 70.295951 149 0.000525
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=68'484 lcod 68'484 mlcod 68'484 active mbc={}] exit Started/Primary/Active 70.302267 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=68'484 lcod 68'484 mlcod 68'484 active mbc={}] exit Started/Primary 71.301602 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=68'484 lcod 68'484 mlcod 68'484 active mbc={}] exit Started 71.301721 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=68'484 lcod 68'484 mlcod 68'484 active mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705393791s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 active pruub 173.962097168s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705332756s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 173.962097168s@ mbc={}] exit Reset 0.000113 1 0.000174
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705332756s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 173.962097168s@ mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705332756s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 173.962097168s@ mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705332756s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 173.962097168s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705332756s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 173.962097168s@ mbc={}] exit Start 0.000016 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705332756s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 173.962097168s@ mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003453 2 0.000076
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] lb MIN local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 DELETING pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.069232 2 0.000235
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] lb MIN local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] exit Started/ToDelete 0.072744 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] lb MIN local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] exit Started 1.244403 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 417792 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] exit Started/Stray 0.855783 3 0.000075
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] exit Started 0.855852 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] exit Reset 0.000098 1 0.000137
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] exit Start 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000961 2 0.000053
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000057 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb1eb9/0x168000, compress 0x0/0x0/0x0, omap 0xfe55, meta 0x2bc01ab), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 401408 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805930 data_alloc: 218103808 data_used: 5639
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb1eb9/0x168000, compress 0x0/0x0/0x0, omap 0xfe55, meta 0x2bc01ab), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 117 handle_osd_map epochs [117,118], i have 118, src has [1,118]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=69) [2] r=0 lpr=69 crt=39'483 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 70.088814 149 0.000618
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=69) [2] r=0 lpr=69 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active 70.092226 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=69) [2] r=0 lpr=69 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary 71.104840 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=69) [2] r=0 lpr=69 crt=39'483 mlcod 0'0 active mbc={}] exit Started 71.105520 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=69) [2] r=0 lpr=69 crt=39'483 mlcod 0'0 active mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005606 3 0.000129
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.006729 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912906647s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 active pruub 176.032379150s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912783623s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY pruub 176.032379150s@ mbc={}] exit Reset 0.000176 1 0.000233
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912783623s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY pruub 176.032379150s@ mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912783623s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY pruub 176.032379150s@ mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912783623s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY pruub 176.032379150s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912783623s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY pruub 176.032379150s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912783623s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY pruub 176.032379150s@ mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 385024 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.919067 5 0.000393
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000119 1 0.000115
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000314 1 0.000039
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.042534 2 0.000050
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/Stray 1.017657 3 0.000064
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started 1.017714 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY mbc={}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.055496 1 0.000142
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary/Active 1.017870 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary 2.024619 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started 2.024656 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901124954s) [0] async=[0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 active pruub 183.038589478s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Reset 0.000164 1 0.000205
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Start 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901021957s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 183.038589478s@ mbc={}] exit Reset 0.000158 1 0.000220
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901021957s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 183.038589478s@ mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901021957s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 183.038589478s@ mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901021957s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 183.038589478s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901021957s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 183.038589478s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901021957s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 183.038589478s@ mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.006916 2 0.000066
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000036 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 303104 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008684 3 0.000095
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.015748 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/Stray 1.019690 7 0.000117
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000103 1 0.000051
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] lb MIN local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 DELETING pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.047284 2 0.000196
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] lb MIN local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/ToDelete 0.047459 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] lb MIN local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started 1.067214 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.266440 5 0.000324
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000127 1 0.000106
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000417 1 0.000037
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035804 2 0.000063
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 335872 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.895073891s of 10.020702362s, submitted: 59
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 120 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.738970 1 0.000151
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active 1.042006 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary 2.057773 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started 2.057801 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224273682s) [1] async=[1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 active pruub 184.419616699s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224187851s) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY pruub 184.419616699s@ mbc={}] exit Reset 0.000125 1 0.000169
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224187851s) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY pruub 184.419616699s@ mbc={}] enter Started
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224187851s) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY pruub 184.419616699s@ mbc={}] enter Start
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224187851s) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY pruub 184.419616699s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224187851s) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY pruub 184.419616699s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224187851s) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY pruub 184.419616699s@ mbc={}] enter Started/Stray
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 121 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb6eec/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 319488 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/Stray 1.011468 7 0.000108
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000071 1 0.000048
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] lb MIN local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 DELETING pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.039298 2 0.000177
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] lb MIN local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete 0.039419 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] lb MIN local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started 1.050932 0 0.000000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb8950/0x173000, compress 0x0/0x0/0x0, omap 0x10881, meta 0x2bbf77f), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1433600 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 806240 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1433600 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1425408 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1425408 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1417216 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 1400832 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807935 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 1400832 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1392640 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1392640 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 1384448 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.777606964s of 10.797446251s, submitted: 11
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1368064 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812763 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 1359872 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 1359872 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1351680 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1351680 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1351680 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815176 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1343488 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1343488 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1335296 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1335296 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.021923065s of 10.029530525s, submitted: 4
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1335296 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817587 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 1327104 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 1327104 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 1327104 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 1318912 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 1318912 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820000 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1310720 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1302528 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1294336 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1294336 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 1286144 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824824 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 1286144 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1253376 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.897191048s of 12.921946526s, submitted: 8
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1236992 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1236992 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1228800 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827235 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1204224 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1204224 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1196032 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1196032 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1187840 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836881 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1187840 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1171456 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1171456 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1171456 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.605541229s of 12.806389809s, submitted: 12
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1163264 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841705 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1163264 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1155072 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1155072 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1146880 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1146880 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841705 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1146880 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1146880 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1138688 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1138688 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1130496 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844116 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1130496 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1122304 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1122304 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.808049202s of 13.814610481s, submitted: 4
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1122304 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 1105920 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848940 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1097728 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1097728 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1089536 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1089536 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1081344 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853762 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1081344 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1081344 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1064960 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1056768 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1056768 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858586 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1056768 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 1032192 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.827822685s of 13.856680870s, submitted: 12
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 1032192 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 1024000 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 1024000 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 860997 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 1007616 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 1007616 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 1007616 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 991232 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 991232 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 863408 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 983040 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 974848 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 974848 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 966656 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 950272 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865821 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 950272 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.025310516s of 14.034622192s, submitted: 6
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 950272 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 942080 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 942080 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 925696 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 873062 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 917504 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 917504 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 909312 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 909312 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 909312 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 873062 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 901120 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 901120 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.012310982s of 11.024168015s, submitted: 6
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 876544 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 876544 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 868352 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875477 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 868352 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 860160 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 860160 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 860160 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 851968 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877890 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 851968 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 851968 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 835584 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.895845413s of 10.908414841s, submitted: 6
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 835584 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 835584 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 882718 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 827392 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 827392 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 827392 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 819200 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70492160 unmapped: 802816 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892374 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 786432 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 786432 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 778240 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 778240 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.920908928s of 10.948004723s, submitted: 14
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70524928 unmapped: 770048 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899617 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 761856 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70557696 unmapped: 737280 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 729088 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 729088 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 704512 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909263 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 696320 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 696320 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 688128 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 688128 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909263 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 688128 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 679936 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.063706398s of 12.084164619s, submitted: 10
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 663552 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 655360 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 655360 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911674 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 630784 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 630784 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 630784 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 614400 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 614400 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911674 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 606208 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 606208 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 589824 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.978821754s of 10.984546661s, submitted: 2
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 573440 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 573440 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914087 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 565248 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 565248 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 557056 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 557056 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 548864 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918909 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 548864 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 548864 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 548864 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.925371170s of 10.942327499s, submitted: 8
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 540672 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 516096 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923731 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 516096 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 491520 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 491520 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 483328 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 483328 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928557 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 483328 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 475136 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 475136 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 466944 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 466944 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 466944 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 458752 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 458752 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 450560 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 450560 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 450560 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 442368 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 434176 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 434176 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 425984 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 425984 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 425984 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 417792 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 417792 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 409600 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 409600 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 401408 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 393216 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 393216 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 385024 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 385024 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 385024 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 376832 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 376832 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 376832 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 368640 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 368640 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 360448 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 360448 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 360448 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 352256 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 352256 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 335872 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 327680 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 327680 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 319488 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 319488 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 319488 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 319488 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 311296 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 311296 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 303104 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 303104 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 294912 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 294912 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 294912 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 286720 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 286720 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 278528 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 278528 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 270336 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 270336 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 262144 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 253952 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 253952 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 253952 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 245760 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 245760 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 237568 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 237568 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 237568 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 221184 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 221184 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 221184 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 212992 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 212992 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 204800 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 204800 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 204800 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 196608 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 196608 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 180224 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 180224 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 180224 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 172032 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 172032 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 172032 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 172032 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 172032 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 163840 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 163840 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 155648 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 155648 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 155648 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 147456 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 147456 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 147456 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 139264 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 131072 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 131072 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 131072 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 122880 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 122880 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 114688 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 114688 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 114688 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 106496 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 106496 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 106496 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 98304 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 98304 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 98304 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 90112 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 90112 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 81920 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 81920 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 65536 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 65536 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 65536 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 57344 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 57344 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 49152 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 49152 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 49152 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 40960 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 40960 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 40960 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 32768 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 32768 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 24576 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 24576 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 24576 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 16384 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 16384 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 8192 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 8192 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 0 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 0 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1040384 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1040384 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1032192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1032192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1032192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 1024000 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 1024000 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1015808 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1015808 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1015808 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 1007616 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 1007616 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 999424 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 999424 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 991232 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 991232 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 991232 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 983040 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 983040 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 983040 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 974848 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 974848 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 974848 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 966656 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 966656 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 958464 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 958464 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 958464 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 950272 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 950272 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 942080 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 942080 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 942080 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 925696 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 925696 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 925696 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 917504 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 917504 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 909312 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 909312 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 901120 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 901120 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 901120 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 892928 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 892928 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 884736 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 884736 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 884736 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 876544 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 876544 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 876544 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 868352 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 868352 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 868352 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 860160 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 860160 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 851968 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 851968 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 843776 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 843776 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 843776 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 835584 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 835584 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 827392 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 20 14:27:18 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002882335' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 827392 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 827392 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 819200 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 811008 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 802816 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 802816 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 802816 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 794624 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 794624 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 786432 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 786432 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 786432 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 778240 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 778240 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 753664 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 745472 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 745472 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 729088 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 729088 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 720896 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 720896 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 720896 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 712704 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 712704 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 712704 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 704512 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 704512 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 696320 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 696320 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 696320 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 688128 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 688128 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 688128 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 679936 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 679936 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 671744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 671744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 663552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 663552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 663552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 655360 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 655360 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 647168 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 647168 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 647168 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 638976 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 638976 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 630784 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 630784 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 622592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 622592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 598016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 581632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 581632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 573440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 573440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 573440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5409 writes, 23K keys, 5409 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5409 writes, 759 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5409 writes, 23K keys, 5409 commit groups, 1.0 writes per commit group, ingest: 18.48 MB, 0.03 MB/s#012Interval WAL: 5409 writes, 759 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 491520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 491520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 483328 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 483328 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 475136 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 475136 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 475136 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 466944 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 466944 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 458752 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 442368 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 442368 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 434176 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 434176 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 425984 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 425984 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 425984 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 417792 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 417792 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 409600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 409600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 409600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 401408 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 401408 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 401408 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 376832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 376832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 352256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 352256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 352256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 344064 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 316.272521973s of 316.286499023s, submitted: 8
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 40960 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 819200 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 819200 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 811008 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 811008 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 802816 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 794624 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 794624 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 794624 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 786432 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 786432 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 778240 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 778240 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 770048 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 770048 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 770048 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 761856 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 761856 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 778240 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 778240 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 761856 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 761856 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 761856 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 753664 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 753664 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 737280 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 737280 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 737280 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 729088 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 729088 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 712704 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 712704 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 704512 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 704512 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 704512 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 696320 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 696320 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 688128 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 688128 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 688128 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 679936 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 679936 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 671744 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 671744 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 671744 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 663552 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 663552 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 663552 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 655360 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 655360 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 647168 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 647168 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 647168 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 638976 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 638976 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 622592 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 622592 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 622592 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 614400 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 614400 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 598016 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 598016 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 589824 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 589824 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 589824 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 581632 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 581632 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 573440 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 573440 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 532480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 532480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 532480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 532480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 532480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 507904 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 507904 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 507904 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 507904 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 507904 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 475136 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 475136 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 475136 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: mgrc ms_handle_reset ms_handle_reset con 0x5564ed496000
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/894791725
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/894791725,v1:192.168.122.100:6801/894791725]
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: mgrc handle_mgr_configure stats_period=5
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.854583740s of 300.153442383s, submitted: 90
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1236992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1236992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1236992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1236992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1236992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread fragmentation_score=0.000142 took=0.000036s
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1196032 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1196032 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1196032 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5637 writes, 24K keys, 5637 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5637 writes, 873 syncs, 6.46 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1155072 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.902282715s of 299.933044434s, submitted: 24
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1105920 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1105920 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 786432 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 786432 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 786432 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 786432 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 786432 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 770048 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 770048 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 761856 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 761856 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 761856 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 712704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 712704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 712704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 712704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 712704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 712704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 704512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 704512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 704512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 704512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 704512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 679936 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 679936 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 679936 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 679936 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 647168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: do_command 'config diff' '{prefix=config diff}'
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: do_command 'config show' '{prefix=config show}'
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 286720 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: do_command 'counter dump' '{prefix=counter dump}'
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: do_command 'counter schema' '{prefix=counter schema}'
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 2179072 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 1900544 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:18 np0005589310 ceph-osd[88112]: do_command 'log dump' '{prefix=log dump}'
Jan 20 14:27:18 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14448 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} v 0)
Jan 20 14:27:18 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} : dispatch
Jan 20 14:27:18 np0005589310 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 14:27:18 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 14:27:19 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 20 14:27:19 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1755243641' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 20 14:27:19 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14452 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:19 np0005589310 podman[246361]: 2026-01-20 19:27:19.429299214 +0000 UTC m=+0.100243111 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 14:27:19 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} v 0)
Jan 20 14:27:19 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} : dispatch
Jan 20 14:27:19 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 20 14:27:19 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2287267857' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 20 14:27:19 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14456 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:20 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:27:20 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 20 14:27:20 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3013419084' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 20 14:27:20 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:20 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14463 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 14:27:20 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 20 14:27:20 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3861527620' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 20 14:27:21 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14466 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 14:27:21 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 20 14:27:21 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651649235' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 20 14:27:21 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14470 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 14:27:22 np0005589310 podman[246676]: 2026-01-20 19:27:22.016439713 +0000 UTC m=+0.088264579 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 14:27:22 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 20 14:27:22 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1911391847' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 20 14:27:22 np0005589310 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 14:27:22 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14474 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 14:27:22 np0005589310 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14478 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 14:27:22 np0005589310 ceph-mgr[75417]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 14:27:22 np0005589310 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: 2026-01-20T19:27:22.893+0000 7f97a9c36640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 14:27:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 20 14:27:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/859625886' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 20 14:27:23 np0005589310 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 20 14:27:23 np0005589310 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3285311335' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.038358 7 0.000066
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.046756 7 0.000070
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.084935 3 0.000053
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.084972 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.074727 2 0.000031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.074943 2 0.000035
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000013 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 54 heartbeat osd_stat(store_statfs(0x4fe084000/0x0/0x4ffc00000, data 0xb566e/0x146000, compress 0x0/0x0/0x0, omap 0x6c12, meta 0x1a293ee), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1302528 heap: 77668352 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 1.014564 1 0.000153
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000029 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.089642 2 0.000030
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000011 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.117679 3 0.000094
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125604 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.118566 3 0.000050
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125450 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.119371 3 0.000148
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125169 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.120096 3 0.000127
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.126372 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.120069 3 0.000052
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.126103 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.120401 3 0.000066
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125901 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.120613 3 0.000078
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.122530 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 55 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.120649 3 0.000614
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125649 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.121010 3 0.000054
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125302 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122465 3 0.000073
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125717 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122926 3 0.000052
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125439 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122609 3 0.000686
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125040 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.123005 3 0.000059
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.124993 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.123054 3 0.000053
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.123425 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.121706 3 0.000859
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.124183 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.123373 3 0.001231
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.123906 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 39'483 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a(unlocked)] enter Initial
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000124 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000022
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000016 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000487 1 0.000062
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6(unlocked)] enter Initial
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000101 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000027
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000147 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000227 1 0.000273
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2(unlocked)] enter Initial
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000118 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000218
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000217 1 0.000054
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e(unlocked)] enter Initial
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000057 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000019
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000070 1 0.000035
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001970 2 0.000059
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002202 2 0.000055
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001486 2 0.000062
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001363 2 0.000035
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.077669 5 0.000225
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.078403 5 0.000488
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.077875 5 0.000229
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.076118 5 0.000223
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.074394 5 0.000301
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.076301 5 0.000166
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.077339 5 0.000578
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.074457 5 0.000212
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.073436 5 0.001158
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.073785 5 0.000320
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.075900 5 0.000219
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.073444 5 0.000275
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.076401 5 0.000289
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.077128 5 0.000192
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.073685 5 0.000404
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.074048 5 0.000183
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.269928 4 0.000140
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.359731 5 0.000045
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000017 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 671744 heap: 77668352 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.442528 5 0.000038
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive 1.442583 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.067827 1 0.000090
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.427710 5 0.000062
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000014 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.156294 1 0.000171
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000113 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.584173 5 0.000029
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000010 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.602897 5 0.000030
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive 1.602974 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.066547 1 0.000466
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000032 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.663707 5 0.000087
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000031 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.003135 1 0.000274
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.654671 5 0.000090
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000028 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.003404 1 0.000161
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.477061 1 0.000365
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000777 1 0.000043
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 55 heartbeat osd_stat(store_statfs(0x4fcede000/0x0/0x4ffc00000, data 0xb740f/0x14a000, compress 0x0/0x0/0x0, omap 0x86c7, meta 0x2bc7939), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.292979 2 0.000106
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.770450 1 0.000060
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000634 1 0.000052
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.030917 2 0.000109
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.802275 1 0.000040
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000484 1 0.000086
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.048339 2 0.000059
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.851225 1 0.000024
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000439 1 0.000124
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.892049 1 0.000032
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.040787 2 0.000070
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000481 1 0.000043
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 55 handle_osd_map epochs [55,56], i have 55, src has [1,56]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 55 handle_osd_map epochs [56,56], i have 56, src has [1,56]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.116025 1 0.000305
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.996921 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.122547 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.122576 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080907822s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.078788757s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080821991s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078788757s@ mbc={}] exit Reset 0.000131 1 0.000201
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080821991s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078788757s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080821991s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078788757s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080821991s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078788757s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080821991s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078788757s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080821991s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078788757s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.148209 1 0.000178
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.996981 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.122452 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.122488 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080473900s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.078704834s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080350876s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078704834s@ mbc={}] exit Reset 0.000205 1 0.000285
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080350876s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078704834s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080350876s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078704834s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080350876s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078704834s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080350876s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078704834s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080350876s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078704834s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.067833 1 0.000279
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.997083 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.122274 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.122309 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.988234 2 0.000062
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990006 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080650330s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.079368591s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080478668s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079368591s@ mbc={}] exit Reset 0.000246 1 0.000334
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080478668s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079368591s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080478668s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079368591s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080478668s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079368591s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080478668s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079368591s@ mbc={}] exit Start 0.000019 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080478668s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079368591s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.988690 2 0.000141
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.991206 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.027155 1 0.000179
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.995929 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.118480 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.118524 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079759598s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.079399109s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989159 2 0.000066
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.990651 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079503059s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079399109s@ mbc={}] exit Reset 0.000656 1 0.000694
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079503059s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079399109s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079503059s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079399109s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079503059s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079399109s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079503059s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079399109s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079503059s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079399109s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991074 2 0.000092
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993616 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.004315 3 0.000175
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004834 3 0.000155
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+recovering+remapped rops=1 mbc={255={(0+1)=5}}] exit Started/Primary/Active/Recovering 0.031165 4 0.000159
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000209 1 0.000103
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped rops=1 mbc={255={(0+1)=5}}] enter Started/Primary/Active/NotRecovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000008 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped rops=1 mbc={255={(0+1)=5}}] exit Started/Primary/Active/NotRecovering 0.000656 1 0.000242
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped rops=1 mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.005892 4 0.000108
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005212 4 0.000096
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.083210 3 0.000045
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.081427 2 0.000027
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000040 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.063482 1 0.000286
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.070942 4 0.000077
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001037 1 0.000082
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.058651 2 0.000051
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.130765 4 0.000087
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000447 1 0.000221
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.054018 2 0.000094
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.185593 4 0.000058
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000839 1 0.000042
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052077 2 0.000055
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.238614 4 0.000044
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000793 1 0.000078
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 573440 heap: 77668352 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.066028 2 0.000076
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.305599 4 0.000071
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000622 1 0.000085
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 55'485 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.068350 2 0.000090
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 55'485 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.374724 4 0.000060
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000594 1 0.000106
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.066269 2 0.000062
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.441692 4 0.000445
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000494 1 0.000079
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.061298 2 0.000038
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.503646 4 0.000069
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000568 1 0.000108
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.543641 4 0.000081
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.039367 2 0.000047
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000760 1 0.000036
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059181 2 0.000045
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.603644 4 0.000188
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000411 1 0.000093
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038279 2 0.000079
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.642478 4 0.000174
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000411 1 0.000054
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.024208 2 0.000077
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.742976 1 0.000033
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000454 1 0.000088
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.032500 2 0.000053
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.895127 7 0.000059
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.895217 7 0.000028
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.895246 7 0.000172
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.290757179s of 10.140542030s, submitted: 785
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875329 7 0.000049
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875296 7 0.000031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875368 7 0.000027
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875462 7 0.000019
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875570 7 0.000086
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875676 7 0.000075
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875736 7 0.000040
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875783 7 0.000099
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875810 7 0.000051
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875875 7 0.000034
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875925 7 0.000019
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875970 7 0.000089
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876020 7 0.000036
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876105 7 0.000039
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876165 7 0.000029
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876208 7 0.000048
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876203 7 0.000141
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876234 7 0.000092
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876279 7 0.000026
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876307 7 0.000110
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876371 7 0.000020
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876412 7 0.000140
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876484 7 0.000120
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876520 7 0.000021
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876578 7 0.000024
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876655 7 0.000029
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876720 7 0.000022
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876779 7 0.000024
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876831 7 0.000020
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876935 7 0.000032
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877011 7 0.000021
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876899 7 0.000025
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876945 7 0.000043
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876993 7 0.000022
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877116 7 0.000037
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877182 7 0.000022
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877266 7 0.000042
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877282 7 0.000084
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877258 7 0.000094
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876351 7 0.000059
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876460 7 0.000069
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876522 7 0.000076
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876599 7 0.000096
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876648 7 0.000046
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876686 7 0.000041
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876723 7 0.000042
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876799 7 0.000020
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876888 7 0.000037
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876943 7 0.000046
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876977 7 0.000021
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877057 7 0.000020
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877136 7 0.000048
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.3( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874426 7 0.000033
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.3( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874489 7 0.000039
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874511 7 0.000024
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.6( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874592 7 0.000027
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.6( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874552 7 0.000047
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874609 7 0.000030
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874655 7 0.000022
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874707 7 0.000026
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.17( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874753 7 0.000020
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.17( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874815 7 0.000038
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874896 7 0.000043
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874937 7 0.000022
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.9( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875012 7 0.000052
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.9( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875016 7 0.000052
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.15( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875097 7 0.000026
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.15( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875195 7 0.000055
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875278 7 0.000052
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875313 7 0.000027
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875395 7 0.000046
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875438 7 0.000054
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875506 7 0.000023
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875566 7 0.000039
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875627 7 0.000058
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875623 7 0.000029
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875687 7 0.000064
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875701 7 0.000018
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875777 7 0.000041
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.827224 7 0.000060
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.461630 4 0.000104
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.300950 4 0.000215
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.15( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.008881 1 0.000121
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.15( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.904067 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.15( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 3.925059 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1a( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.015372 1 0.000057
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1a( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.910651 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1a( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.931866 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1d( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022082 1 0.000091
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1d( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.897467 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1d( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.938448 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1e( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029607 1 0.000057
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1e( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.924864 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1e( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.946022 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036691 1 0.000081
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.912041 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.949494 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.15( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044055 1 0.000067
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.15( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.919475 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.15( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.960664 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.3( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051389 1 0.000063
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.3( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.926916 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.3( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.963858 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.c( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058729 1 0.000074
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.c( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.934360 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.c( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.971902 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.12( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066631 1 0.000065
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.12( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.942342 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.12( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.982361 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.d( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.080622 1 0.000039
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.d( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.956448 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.d( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.995032 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.086510 1 0.000049
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.962338 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.000852 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.b( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088245 1 0.000030
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.b( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.964126 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.b( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.002297 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.2( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.095207 1 0.000031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.2( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.971128 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.2( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.006259 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.102565 1 0.000054
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.978534 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.017090 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.8( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.110029 1 0.000065
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.8( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.986051 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.8( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.020876 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.9( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.118804 1 0.000045
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.9( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.994911 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.9( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.031601 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.7( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.124943 1 0.000051
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.7( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.001141 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.7( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.037869 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.d( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.132050 1 0.000041
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.d( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.008251 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.d( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.047037 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.2( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.139342 1 0.000034
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.2( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.015603 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.2( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.054910 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.11( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.146716 1 0.000040
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.11( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.022966 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.11( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.063160 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.5( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.154511 1 0.000027
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.5( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.030811 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.5( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.068773 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.4( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.161507 1 0.000037
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.4( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.037824 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.4( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.072262 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.2( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.168825 1 0.000057
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.2( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.045184 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.2( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.082589 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.15( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.176371 1 0.000033
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.15( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.052779 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.15( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.074732 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.e( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.183410 1 0.000057
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.e( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.059869 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.e( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.096126 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.8( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.190624 1 0.000040
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.8( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.067140 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.8( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.102283 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.198020 1 0.000048
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.074576 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.109274 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.205338 1 0.000053
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.081961 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.116043 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.a( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.212684 1 0.000057
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.a( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.089384 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.a( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.124129 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1b( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.220064 1 0.000052
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1b( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.096815 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1b( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.131155 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.11( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.227826 1 0.000034
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.11( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.104637 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.11( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.137307 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1a( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.235362 1 0.000079
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1a( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.112249 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1a( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.146135 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.759197 1 0.000165
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.022317 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.148730 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.148770 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 55'485 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.570116 1 0.000172
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 55'485 active+remapped mbc={255={}}] exit Started/Primary/Active 2.019351 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053964615s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.078872681s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 55'485 active+remapped mbc={255={}}] exit Started/Primary 3.144408 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 55'485 active+remapped mbc={255={}}] exit Started 3.144458 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 55'485 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.814267 1 0.000109
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.021422 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.147090 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.147128 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053746223s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078872681s@ mbc={}] exit Reset 0.000284 1 0.000415
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053746223s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078872681s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053746223s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078872681s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053746223s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078872681s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053746223s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078872681s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053746223s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078872681s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054486275s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079605103s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054221153s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079605103s@ mbc={}] exit Reset 0.000341 1 0.000413
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054221153s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079605103s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054221153s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079605103s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054221153s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079605103s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054221153s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079605103s@ mbc={}] exit Start 0.000016 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054221153s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079605103s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.341688 1 0.000175
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.022650 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.148780 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.148813 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054390907s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079986572s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054303169s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079986572s@ mbc={}] exit Reset 0.000121 1 0.000195
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054670334s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 active pruub 94.079673767s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.442018 1 0.000167
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.022548 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054303169s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079986572s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054303169s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079986572s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054303169s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079986572s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054303169s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079986572s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054303169s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079986572s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.148470 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.148580 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053840637s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY pruub 94.079673767s@ mbc={}] exit Reset 0.000867 1 0.000914
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053840637s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY pruub 94.079673767s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053840637s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY pruub 94.079673767s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053840637s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY pruub 94.079673767s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053840637s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY pruub 94.079673767s@ mbc={}] exit Start 0.000019 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053636551s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079513550s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053840637s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY pruub 94.079673767s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053556442s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079513550s@ mbc={}] exit Reset 0.000113 1 0.000244
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053556442s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079513550s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053556442s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079513550s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053556442s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079513550s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053556442s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079513550s@ mbc={}] exit Start 0.000016 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053556442s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079513550s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 57 handle_osd_map epochs [57,57], i have 57, src has [1,57]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.707401 1 0.000573
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.020635 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.146095 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.146122 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.504197 1 0.000223
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.245703 1 0.000117
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.020802 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.303518 1 0.000132
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.146547 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.019629 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.143552 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053482056s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079612732s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.143586 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.022082 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.146575 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053367615s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079612732s@ mbc={}] exit Reset 0.000140 1 0.000185
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053367615s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079612732s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053367615s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079612732s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053318024s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079582214s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.147501 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053416252s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079689026s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053367615s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079612732s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.147572 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053367615s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079612732s@ mbc={}] exit Start 0.000033 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053367615s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079612732s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053251266s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079582214s@ mbc={}] exit Reset 0.000097 1 0.000211
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053251266s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079582214s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053251266s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079582214s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053251266s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079582214s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053251266s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079582214s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053251266s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079582214s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.640774 1 0.000129
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.020318 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.143767 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.144724 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.279072 1 0.000256
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053354263s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079818726s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.020503 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.145513 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.145549 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053389549s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079895020s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053297043s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079818726s@ mbc={}] exit Reset 0.000093 1 0.000132
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053297043s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079818726s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053297043s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079818726s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053297043s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079818726s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053352356s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079895020s@ mbc={}] exit Reset 0.000060 1 0.000092
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053352356s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079895020s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053297043s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079818726s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053203583s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079658508s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053352356s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079895020s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053241730s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079689026s@ mbc={}] exit Reset 0.000254 1 0.000296
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053352356s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079895020s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053352356s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079895020s@ mbc={}] exit Start 0.000034 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053297043s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079818726s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053352356s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079895020s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053241730s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079689026s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053241730s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079689026s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053241730s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079689026s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053241730s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079689026s@ mbc={}] exit Start 0.000015 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053241730s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079689026s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.402410 1 0.000153
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.020374 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.144575 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.144598 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053125381s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079887390s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.052889824s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079658508s@ mbc={}] exit Reset 0.000416 1 0.000690
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.052889824s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079658508s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.052889824s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079658508s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.052889824s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079658508s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053079605s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079887390s@ mbc={}] exit Reset 0.000077 1 0.000589
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053079605s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079887390s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053079605s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079887390s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053079605s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079887390s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053079605s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079887390s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053079605s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079887390s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.052889824s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079658508s@ mbc={}] exit Start 0.000075 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.052889824s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079658508s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030506 7 0.000140
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000064 1 0.000064
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1c( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.247082 4 0.000062
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1c( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.124066 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1c( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.157366 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032604 7 0.000136
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033063 7 0.000098
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000074 1 0.000046
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000080 1 0.000048
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034776 7 0.000085
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000050 1 0.000039
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.16( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.252918 4 0.000061
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.16( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.129972 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.16( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.152699 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1f( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.256713 4 0.000045
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1f( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.133643 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1f( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.156523 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1e( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.264079 4 0.000033
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1e( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.141055 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1e( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.173662 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1c( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.271682 4 0.000089
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1c( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.148818 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1c( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.188995 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1c( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.278775 4 0.000060
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1c( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.155940 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1c( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.178532 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.286550 4 0.000063
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.163806 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.204269 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1b( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.293595 4 0.000047
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1b( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.170910 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1b( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.204414 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.11( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.300831 4 0.000034
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.11( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.178146 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.11( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.218706 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.18( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.308171 4 0.000048
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.18( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.185484 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.18( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.220282 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.10( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.315710 4 0.000063
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.10( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.192125 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.10( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.233975 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.322976 4 0.000080
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.199504 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.241576 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1f( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.330405 4 0.000047
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1f( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.206973 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1f( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.249138 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.10( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.337575 4 0.000069
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.10( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.214238 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.10( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.255746 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.b( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.345031 4 0.000041
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.b( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.221730 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.b( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.259372 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.4( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.352326 4 0.000046
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.4( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.229064 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.4( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.266895 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.359658 4 0.000055
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.236434 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.272463 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.367154 4 0.000059
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.244022 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.283460 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.4( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.374202 4 0.000060
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.4( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.251142 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.4( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.287794 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.18( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.381499 4 0.000038
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.18( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.258501 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.18( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.301072 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.9( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.388730 4 0.000057
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.9( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.265749 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.9( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.302175 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.396060 4 0.000058
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.273164 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.309710 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.14( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.403446 4 0.000051
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.14( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.280623 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.14( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.323212 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.3( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.410763 4 0.000060
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.3( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.285235 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.3( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.327925 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.9( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.418036 4 0.000042
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.9( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.292557 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.9( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.332567 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.e( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.425413 4 0.000053
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.e( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.299961 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.e( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.343750 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.432715 4 0.000055
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.307349 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.350926 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.6( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.440298 4 0.000051
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.6( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.314914 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.6( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.354143 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.f( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.447429 4 0.000050
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.f( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.322089 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.f( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.366428 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.3( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.454824 4 0.000040
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.3( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.329515 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.3( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.373770 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.e( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.462234 4 0.000041
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.e( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.336984 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.e( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.380126 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.17( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.469713 4 0.000059
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.17( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.344520 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.17( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.371512 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.6( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.476913 4 0.000060
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.6( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.351810 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.6( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.392273 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.484138 4 0.000054
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.359078 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.399190 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.f( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.491384 4 0.000042
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.f( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.366381 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.f( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.408071 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.498757 4 0.000031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.373800 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.413781 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.13( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.506066 4 0.000060
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.13( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.381127 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.13( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.407829 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.513415 4 0.000060
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.388549 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.425636 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.520959 4 0.000057
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.396212 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.436595 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.c( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.528318 4 0.000071
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.c( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.403655 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.c( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.447304 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.19( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.535502 4 0.000078
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.19( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.410861 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.19( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.449725 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1548288 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871146 data_alloc: 218103808 data_used: 16452
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1d( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.542786 4 0.000044
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1d( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.418220 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1d( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.445740 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.18( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.550268 4 0.000059
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.18( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.425782 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.18( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.453174 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1f( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.557856 4 0.000058
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1f( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.433451 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1f( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.471044 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.565065 4 0.000058
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.440672 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.486895 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1a( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.572221 4 0.000070
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1a( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.447932 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1a( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.486620 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.17( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.579767 4 0.000058
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.17( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.455447 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.17( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.502370 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1b( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.587038 4 0.000042
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1b( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.462781 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1b( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.509582 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.12( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.594260 4 0.000064
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.12( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.470015 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.12( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.508401 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.14( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.601443 4 0.000062
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.14( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.477299 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.14( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.524102 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.12( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.660706 5 0.000124
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.12( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete 3.487981 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.12( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started 4.579958 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.6( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.667979 5 0.000189
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.6( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete 2.129656 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.6( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started 4.583602 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.f( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.682761 5 0.000184
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.f( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete 1.983793 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.f( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started 4.600137 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 DELETING pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.484653 2 0.000214
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.484776 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.515344 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 DELETING pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.512550 2 0.000184
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.512671 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.545320 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 DELETING pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.542031 2 0.000107
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.542179 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.575316 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 DELETING pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.575603 2 0.000168
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.575769 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.610610 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 57 heartbeat osd_stat(store_statfs(0x4fcedf000/0x0/0x4ffc00000, data 0xbc571/0x14d000, compress 0x0/0x0/0x0, omap 0x9467, meta 0x2bc6b99), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 57 handle_osd_map epochs [58,58], i have 58, src has [1,58]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.098329 6 0.000297
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.098183 6 0.000173
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.098573 6 0.000105
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.098909 6 0.000102
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.098911 6 0.000079
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY mbc={}] exit Started/Stray 1.099347 6 0.000156
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.098365 6 0.000252
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.099929 6 0.000115
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.099627 6 0.000133
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.099391 6 0.000114
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.098956 6 0.000147
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.100525 6 0.000574
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000859 1 0.000106
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002129 1 0.000063
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002329 2 0.000030
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002520 2 0.000108
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002743 2 0.000021
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003248 2 0.000098
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003356 2 0.000031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003408 2 0.000069
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003512 2 0.000048
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003582 2 0.000101
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003560 2 0.000588
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003513 2 0.000291
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.092262 3 0.000423
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.093361 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.191858 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.157671 3 0.000254
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.159862 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.258097 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1589248 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.209272 2 0.000267
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.211660 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.310632 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.231227 2 0.000320
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.233851 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.332518 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.290337 2 0.000352
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.293231 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.392177 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=55'486 lcod 55'485 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.356413 2 0.000273
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=55'486 lcod 55'485 unknown NOTIFY mbc={}] exit Started/ToDelete 0.359720 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=55'486 lcod 55'485 unknown NOTIFY mbc={}] exit Started 1.459164 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.423096 2 0.000143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.426533 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.525034 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.452315 2 0.000118
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.455797 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.555779 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.511561 2 0.000119
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.515132 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.614826 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.570774 2 0.000118
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.574419 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.673911 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.637705 2 0.000205
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.641400 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.740439 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.689200 2 0.000114
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.692766 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.793400 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1589248 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 1687552 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 58 heartbeat osd_stat(store_statfs(0x4fcef5000/0x0/0x4ffc00000, data 0xbd21a/0x135000, compress 0x0/0x0/0x0, omap 0x96ea, meta 0x2bc6916), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1654784 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1646592 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 664579 data_alloc: 218103808 data_used: 12212
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1638400 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active+clean] exit Started/Primary/Active/Clean 10.216760 14 0.000140
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active 10.312217 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary 11.325834 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started 11.325868 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707896233s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897071838s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707833290s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897071838s@ mbc={}] exit Reset 0.000110 1 0.000166
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707833290s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897071838s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707833290s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897071838s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707833290s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897071838s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707833290s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897071838s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707833290s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897071838s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active+clean] exit Started/Primary/Active/Clean 8.932058 11 0.000230
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active 10.310878 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary 11.327299 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started 11.327338 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707865715s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897239685s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707767487s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897239685s@ mbc={}] exit Reset 0.000159 1 0.000223
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707767487s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897239685s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707767487s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897239685s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707767487s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897239685s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707767487s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897239685s@ mbc={}] exit Start 0.000016 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707767487s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897239685s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active+clean] exit Started/Primary/Active/Clean 8.864652 11 0.000122
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active 10.310615 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary 11.329051 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started 11.329080 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707477570s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897384644s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707447052s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897384644s@ mbc={}] exit Reset 0.000054 1 0.000124
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707447052s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897384644s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707447052s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897384644s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707447052s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897384644s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707447052s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897384644s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707447052s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897384644s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active+clean] exit Started/Primary/Active/Clean 8.641348 11 0.000176
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active 10.310128 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary 11.330560 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started 11.330608 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707047462s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897460938s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.706920624s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897460938s@ mbc={}] exit Reset 0.000220 1 0.000346
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.706920624s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897460938s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.706920624s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897460938s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.706920624s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897460938s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.706920624s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897460938s@ mbc={}] exit Start 0.000027 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.706920624s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897460938s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 59 handle_osd_map epochs [59,59], i have 59, src has [1,59]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1630208 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c(unlocked)] enter Initial
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=0 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000144 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=0 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000052 1 0.000070
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000250 1 0.000065
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] exit Started/Stray 0.601312 6 0.000106
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] exit Started/Stray 0.601825 6 0.000142
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4(unlocked)] enter Initial
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=0 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000105 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=0 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000018
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000078 1 0.000035
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] exit Started/Stray 0.602454 6 0.000109
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.000807 2 0.000060
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] exit Started/Stray 0.601509 6 0.000129
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetLog 0.001191 2 0.000030
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.072709 3 0.000059
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive 0.072773 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000164 1 0.000126
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.131816 3 0.000035
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive 0.131850 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000105 1 0.000088
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.198173 3 0.000065
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive 0.198209 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000128 1 0.000113
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 DELETING pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/Deleting 0.129864 2 0.000265
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete 0.130075 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started 0.805350 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 DELETING pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/Deleting 0.152780 2 0.000153
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete 0.152957 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started 0.886165 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 DELETING pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/Deleting 0.152002 2 0.000236
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete 0.152219 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started 0.952024 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.420158 3 0.000025
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive 0.420184 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000110 1 0.000066
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 DELETING pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/Deleting 0.024929 2 0.000280
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete 0.025145 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started 1.047252 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1572864 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.022405 2 0.000109
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering 1.023990 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 unknown m=4 mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.023558 2 0.000057
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.024722 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002404 3 0.000211
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000105 1 0.000090
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.004288 3 0.000412
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007845 3 0.000048
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.006223 3 0.000055
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.262984 1 0.000051
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 385024 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 61 heartbeat osd_stat(store_statfs(0x4fcee9000/0x0/0x4ffc00000, data 0xc2707/0x13d000, compress 0x0/0x0/0x0, omap 0xa3ce, meta 0x2bc5c32), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 344064 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 669155 data_alloc: 218103808 data_used: 12212
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.939429283s of 11.293741226s, submitted: 284
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 344064 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 61 heartbeat osd_stat(store_statfs(0x4fcee8000/0x0/0x4ffc00000, data 0xc2ab3/0x13e000, compress 0x0/0x0/0x0, omap 0xa3ce, meta 0x2bc5c32), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 303104 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 286720 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 286720 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active+clean] exit Started/Primary/Active/Clean 16.479237 24 0.000205
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active 17.588266 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary 18.604017 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started 18.604103 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active+clean] exit Started/Primary/Active/Clean 15.984634 21 0.000534
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430742264s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 active pruub 108.897323608s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430643082s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897323608s@ mbc={}] exit Reset 0.000192 1 0.000346
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430643082s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897323608s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430643082s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897323608s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430643082s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897323608s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430643082s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897323608s@ mbc={}] exit Start 0.000044 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430643082s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897323608s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active 17.587705 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary 18.605747 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started 18.605821 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430679321s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 active pruub 108.897682190s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430572510s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897682190s@ mbc={}] exit Reset 0.000171 1 0.000551
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430572510s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897682190s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430572510s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897682190s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430572510s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897682190s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430572510s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897682190s@ mbc={}] exit Start 0.000119 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430572510s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897682190s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fcee9000/0x0/0x4ffc00000, data 0xc47ed/0x141000, compress 0x0/0x0/0x0, omap 0xa650, meta 0x2bc59b0), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 278528 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 677806 data_alloc: 218103808 data_used: 12212
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 63 handle_osd_map epochs [63,63], i have 63, src has [1,63]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY mbc={}] exit Started/Stray 1.025293 7 0.000319
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY mbc={}] exit Started/Stray 1.026610 7 0.000179
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.071377 2 0.000374
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive 0.071459 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000092 1 0.000098
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.196039 2 0.000105
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive 0.196140 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000212 1 0.000234
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 DELETING pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/Deleting 0.130071 2 0.000266
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete 0.130234 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started 1.227281 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 DELETING pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/Deleting 0.027069 2 0.000221
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete 0.027378 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started 1.250292 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 270336 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.535138 43 0.001415
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.541382 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.541985 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.542176 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465168953s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.653533936s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465118408s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653533936s@ mbc={}] exit Reset 0.000093 1 0.000158
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465118408s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653533936s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465118408s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653533936s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465118408s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653533936s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465118408s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653533936s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465118408s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653533936s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.536687 43 0.000208
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.541828 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.541887 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.541915 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463762283s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.653617859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463727951s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653617859s@ mbc={}] exit Reset 0.000057 1 0.000123
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463727951s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653617859s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463727951s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653617859s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463727951s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653617859s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463727951s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653617859s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463727951s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653617859s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 64 handle_osd_map epochs [63,64], i have 64, src has [1,64]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.526382 43 0.000261
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.540643 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.541685 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.541788 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473500252s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.663787842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473434448s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.663787842s@ mbc={}] exit Reset 0.000143 1 0.000242
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473434448s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.663787842s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473434448s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.663787842s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473434448s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.663787842s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473434448s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.663787842s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473434448s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.663787842s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.532734 43 0.000280
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.541615 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.541706 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.541742 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467473984s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.658554077s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467450142s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.658554077s@ mbc={}] exit Reset 0.000113 1 0.000264
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467450142s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.658554077s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467450142s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.658554077s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467450142s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.658554077s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467450142s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.658554077s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467450142s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.658554077s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 270336 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.332648 3 0.000057
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.332707 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.333137 3 0.000042
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.331953 3 0.000096
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.332012 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000095 1 0.000139
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000012 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.333362 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.334859 3 0.000039
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.334898 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000158 1 0.000224
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000015 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000207 1 0.000229
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000228 1 0.000442
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000045 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000108 1 0.000117
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000081 1 0.000082
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000143 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000136 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000593 1 0.000624
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000011 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 1 0.000090
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000044 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 262144 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.047315 4 0.000110
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.047549 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.048207 4 0.000180
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.048484 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 66 handle_osd_map epochs [66,66], i have 66, src has [1,66]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.048584 4 0.000107
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.048899 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.048574 4 0.000307
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.048998 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.005287 5 0.000253
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000092 1 0.000071
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.006281 5 0.000397
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.005244 5 0.000965
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.005922 5 0.000257
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000333 1 0.000025
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.044018 1 0.000024
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.043689 2 0.000072
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000611 1 0.000115
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.032571 2 0.000043
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.077289 1 0.000018
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000462 1 0.000056
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059428 2 0.000061
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.137105 1 0.000120
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000274 1 0.000032
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.045446 2 0.000044
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 237568 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.919329 1 0.000065
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003055 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.050642 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.050707 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.003064156s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577781677s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002995491s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577781677s@ mbc={}] exit Reset 0.000105 1 0.000197
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002995491s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577781677s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002995491s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577781677s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002995491s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577781677s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002995491s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577781677s@ mbc={}] exit Start 0.000017 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002995491s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577781677s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.859335 1 0.000066
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003032 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.051564 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.051602 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002538681s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577713013s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002419472s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577713013s@ mbc={}] exit Reset 0.000182 1 0.000707
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002419472s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577713013s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002419472s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577713013s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002419472s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577713013s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002419472s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577713013s@ mbc={}] exit Start 0.000049 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002419472s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577713013s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.815168 1 0.000103
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004156 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.053096 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.053147 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001266479s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577674866s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001171112s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] exit Reset 0.000154 1 0.000238
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001171112s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001171112s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001171112s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001171112s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] exit Start 0.000033 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001171112s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.954672 1 0.000213
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004342 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.053403 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.053456 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000796318s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577674866s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000699997s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] exit Reset 0.000134 1 0.000241
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000699997s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000699997s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000699997s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000699997s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] exit Start 0.000043 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000699997s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fcedf000/0x0/0x4ffc00000, data 0xcb793/0x14b000, compress 0x0/0x0/0x0, omap 0xb0f7, meta 0x2bc4f09), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 278528 heap: 80814080 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 685971 data_alloc: 218103808 data_used: 12676
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012222 6 0.000311
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012818 6 0.000157
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000221 1 0.000039
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014722 6 0.000143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014301 6 0.000180
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000440 1 0.000037
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000935 2 0.000088
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001383 2 0.000029
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.069710 3 0.000198
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.070019 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.082354 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.113842 3 0.000157
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.114333 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.127228 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 68 ms_handle_reset con 0x5614dcfbe800 session 0x5614da7f7a40
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 68 ms_handle_reset con 0x5614dce11400 session 0x5614dcd66e00
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 68 ms_handle_reset con 0x5614db4e1800 session 0x5614da809a40
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.142979 2 0.000402
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.143982 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.158822 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.201534 2 0.000266
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.202967 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.217373 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.677583694s of 10.201479912s, submitted: 85
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 1007616 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 68 ms_handle_reset con 0x5614dd22ec00 session 0x5614dcd0e380
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 68 ms_handle_reset con 0x5614dc986400 session 0x5614dc2db340
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 68 ms_handle_reset con 0x5614dbb6c800 session 0x5614daf4f6c0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 69 heartbeat osd_stat(store_statfs(0x4fcedb000/0x0/0x4ffc00000, data 0xcea83/0x14b000, compress 0x0/0x0/0x0, omap 0xb627, meta 0x2bc49d9), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 1409024 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1761280 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 1744896 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1720320 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 649669 data_alloc: 218103808 data_used: 11846
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 70 heartbeat osd_stat(store_statfs(0x4fced7000/0x0/0x4ffc00000, data 0xd1f75/0x151000, compress 0x0/0x0/0x0, omap 0xbb0e, meta 0x2bc44f2), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1720320 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 37.534300 65 0.000257
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 37.543722 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 37.543771 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 37.543806 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.466033936s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 active pruub 117.657966614s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465988159s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 117.657966614s@ mbc={}] exit Reset 0.000180 1 0.000135
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465988159s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 117.657966614s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465988159s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 117.657966614s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465988159s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 117.657966614s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465988159s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 117.657966614s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465988159s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 117.657966614s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active+clean] exit Started/Primary/Active/Clean 37.534183 65 0.000241
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] exit Started/Primary/Active 37.543151 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] exit Started/Primary 37.543198 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] exit Started 37.543227 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465965271s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 active pruub 117.658271790s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465903282s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 117.658271790s@ mbc={}] exit Reset 0.000109 1 0.000156
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465903282s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 117.658271790s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465903282s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 117.658271790s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465903282s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 117.658271790s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465903282s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 117.658271790s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465903282s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 117.658271790s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 71 handle_osd_map epochs [71,71], i have 71, src has [1,71]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1736704 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] exit Started/Stray 0.520754 3 0.000048
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] exit Started 0.520825 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] exit Reset 0.000132 1 0.000189
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] exit Start 0.000008 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000050
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000036 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.521397 3 0.000136
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.521467 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000147 1 0.000222
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000050 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000191
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000044 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000017 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 1867776 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 72 handle_osd_map epochs [73,73], i have 73, src has [1,73]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993032 4 0.000102
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.993173 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992751 4 0.000154
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.992967 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 31.810121 58 0.000226
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 31.828012 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 32.847414 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 32.847455 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190241814s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 active pruub 116.897994995s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190208435s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 116.897994995s@ mbc={}] exit Reset 0.000059 1 0.000106
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190208435s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 116.897994995s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190208435s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 116.897994995s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190208435s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 116.897994995s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190208435s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 116.897994995s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190208435s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 116.897994995s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.002994 5 0.000273
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000143 1 0.000138
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000414 1 0.000072
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.005830 5 0.000272
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 73 handle_osd_map epochs [73,73], i have 73, src has [1,73]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.045743 2 0.000036
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.043349 1 0.000094
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000592 1 0.000059
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052127 2 0.000059
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 1859584 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.906353 1 0.000105
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.008493 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.001491 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.001614 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997115135s) [2] async=[2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 active pruub 124.712516785s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997002602s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.712516785s@ mbc={}] exit Reset 0.000157 1 0.000219
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997002602s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.712516785s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997002602s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.712516785s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997002602s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.712516785s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997002602s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.712516785s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997002602s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.712516785s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.959542 1 0.000100
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active 1.009121 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary 2.002313 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started 2.002340 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993810654s) [2] async=[2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 active pruub 124.709548950s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993714333s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 124.709548950s@ mbc={}] exit Reset 0.000140 1 0.000193
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993714333s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 124.709548950s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993714333s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 124.709548950s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993714333s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 124.709548950s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993714333s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 124.709548950s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993714333s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 124.709548950s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 74 handle_osd_map epochs [74,74], i have 74, src has [1,74]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016025 7 0.000098
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000101 1 0.000151
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 DELETING pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.004090 1 0.000027
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.004248 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.020388 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 1859584 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 662224 data_alloc: 218103808 data_used: 11846
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/Stray 1.055232 6 0.000105
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000189 1 0.000066
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.055756 6 0.000188
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000880 1 0.000487
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] lb MIN local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 DELETING pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.054748 3 0.000352
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] lb MIN local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete 0.055026 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] lb MIN local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started 1.110313 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] lb MIN local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 DELETING pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.105306 3 0.000229
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] lb MIN local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.106336 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] lb MIN local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.162332 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 1835008 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fcece000/0x0/0x4ffc00000, data 0xda796/0x15c000, compress 0x0/0x0/0x0, omap 0xc811, meta 0x2bc37ef), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.866786003s of 10.976214409s, submitted: 53
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1818624 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fcece000/0x0/0x4ffc00000, data 0xda796/0x15c000, compress 0x0/0x0/0x0, omap 0xc811, meta 0x2bc37ef), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 1785856 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fcece000/0x0/0x4ffc00000, data 0xda796/0x15c000, compress 0x0/0x0/0x0, omap 0xc811, meta 0x2bc37ef), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 1777664 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 1777664 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 642583 data_alloc: 218103808 data_used: 11318
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 1777664 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fcece000/0x0/0x4ffc00000, data 0xda796/0x15c000, compress 0x0/0x0/0x0, omap 0xc811, meta 0x2bc37ef), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=39'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 37.626592 62 0.000245
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 37.631950 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 38.625637 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 38.625770 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.373511314s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 active pruub 127.006271362s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.372819901s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 127.006271362s@ mbc={}] exit Reset 0.000755 1 0.001078
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.372819901s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 127.006271362s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.372819901s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 127.006271362s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.372819901s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 127.006271362s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.372819901s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 127.006271362s@ mbc={}] exit Start 0.000287 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.372819901s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 127.006271362s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 76 handle_osd_map epochs [76,76], i have 76, src has [1,76]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.227669 7 0.000471
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000095 1 0.000075
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] lb MIN local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 DELETING pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.002485 1 0.000059
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] lb MIN local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.002632 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] lb MIN local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.230660 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1761280 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b(unlocked)] enter Initial
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=0 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000194 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=0 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000051 1 0.000107
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000355 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000218 1 0.000574
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001040 2 0.000124
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1761280 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010165 2 0.000049
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.011527 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002210 3 0.000181
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000135 1 0.000085
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000012 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.010246 3 0.000174
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000023 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 1744896 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1695744 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 661565 data_alloc: 218103808 data_used: 11318
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1695744 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 79 heartbeat osd_stat(store_statfs(0x4fcec4000/0x0/0x4ffc00000, data 0xe16a8/0x168000, compress 0x0/0x0/0x0, omap 0xd25d, meta 0x2bc2da3), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.854744911s of 10.015905380s, submitted: 22
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1695744 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1687552 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 54.519082 94 0.000653
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 54.524615 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 54.524897 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 54.524952 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481973648s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 active pruub 133.654129028s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481848717s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.654129028s@ mbc={}] exit Reset 0.000167 1 0.000221
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481848717s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.654129028s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481848717s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.654129028s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481848717s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.654129028s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481848717s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.654129028s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481848717s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.654129028s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active+clean] exit Started/Primary/Active/Clean 54.514614 94 0.000360
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] exit Started/Primary/Active 54.523269 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] exit Started/Primary 54.523331 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] exit Started 54.523365 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485882759s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 active pruub 133.658874512s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485768318s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 133.658874512s@ mbc={}] exit Reset 0.000153 1 0.000262
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485768318s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 133.658874512s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485768318s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 133.658874512s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485768318s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 133.658874512s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485768318s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 133.658874512s@ mbc={}] exit Start 0.000069 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485768318s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 133.658874512s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1679360 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.822653 3 0.000049
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.822710 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000156 1 0.000217
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] exit Started/Stray 0.821990 3 0.000171
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] exit Started 0.822128 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000062 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] exit Reset 0.000049 1 0.000070
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] exit Start 0.000013 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000051
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000045 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000125 1 0.000322
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000047 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1695744 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 675528 data_alloc: 218103808 data_used: 11318
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989848 4 0.000156
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.990161 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991113 4 0.000091
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.991261 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1671168 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 82 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.700619 5 0.000391
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000074 1 0.000057
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000689 1 0.000017
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.701437 5 0.000273
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035448 2 0.000049
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.034448 1 0.000026
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000459 1 0.000049
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.066522 2 0.000052
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.271201 1 0.000125
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.008334 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 1.998564 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 1.998698 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.692206383s) [2] async=[2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 active pruub 142.686187744s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.691668510s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 142.686187744s@ mbc={}] exit Reset 0.000694 1 0.000672
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.691668510s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 142.686187744s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.691668510s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 142.686187744s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.691668510s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 142.686187744s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.691668510s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 142.686187744s@ mbc={}] exit Start 0.000055 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.691668510s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 142.686187744s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.205209 1 0.000127
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active 1.008322 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary 1.999598 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started 1.999626 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693123817s) [2] async=[2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 active pruub 142.688095093s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693052292s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 142.688095093s@ mbc={}] exit Reset 0.000094 1 0.000121
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693052292s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 142.688095093s@ mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693052292s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 142.688095093s@ mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693052292s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 142.688095093s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693052292s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 142.688095093s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693052292s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 142.688095093s@ mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 83 handle_osd_map epochs [83,83], i have 83, src has [1,83]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 1638400 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 83 heartbeat osd_stat(store_statfs(0x4fceb4000/0x0/0x4ffc00000, data 0xe838f/0x174000, compress 0x0/0x0/0x0, omap 0xdca5, meta 0x2bc235b), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023649 7 0.000259
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000106 1 0.000083
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/Stray 1.024807 7 0.000195
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000105 1 0.000062
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] lb MIN local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 DELETING pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.046881 2 0.000269
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] lb MIN local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.047069 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] lb MIN local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.070829 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] lb MIN local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 DELETING pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.112093 2 0.000251
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] lb MIN local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112262 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] lb MIN local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started 1.137116 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1523712 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1523712 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1523712 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 662153 data_alloc: 218103808 data_used: 10900
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d(unlocked)] enter Initial
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=0 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000118 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=0 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000040
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000169 1 0.000092
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000666 2 0.000069
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 85 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.167957 2 0.000079
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.168896 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002107 4 0.000369
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000138 1 0.000091
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000048 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.067599 2 0.000253
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000049 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 86 heartbeat osd_stat(store_statfs(0x4fcead000/0x0/0x4ffc00000, data 0xed40b/0x17b000, compress 0x0/0x0/0x0, omap 0xe58c, meta 0x2bc1a74), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1507328 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 1499136 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.464494705s of 10.600705147s, submitted: 52
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 1499136 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1490944 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1482752 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 681384 data_alloc: 218103808 data_used: 10900
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 1474560 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 88 heartbeat osd_stat(store_statfs(0x4fcea7000/0x0/0x4ffc00000, data 0xf0e7f/0x181000, compress 0x0/0x0/0x0, omap 0xea66, meta 0x2bc159a), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 88 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 89 heartbeat osd_stat(store_statfs(0x4fcea7000/0x0/0x4ffc00000, data 0xf0e7f/0x181000, compress 0x0/0x0/0x0, omap 0xea66, meta 0x2bc159a), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 417792 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 90 handle_osd_map epochs [90,91], i have 91, src has [1,91]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 368640 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 368640 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 368640 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694316 data_alloc: 218103808 data_used: 11737
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 360448 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 91 handle_osd_map epochs [92,93], i have 91, src has [1,93]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 368640 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fce9a000/0x0/0x4ffc00000, data 0xf973e/0x190000, compress 0x0/0x0/0x0, omap 0xf4a3, meta 0x2bc0b5d), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 93 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.513738632s of 10.163047791s, submitted: 33
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 344064 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 344064 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 344064 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 708350 data_alloc: 218103808 data_used: 11737
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 94 handle_osd_map epochs [95,96], i have 94, src has [1,96]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 368640 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xfe7e0/0x199000, compress 0x0/0x0/0x0, omap 0xfa0a, meta 0x2bc05f6), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 96 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 360448 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15(unlocked)] enter Initial
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=0 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000092 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=0 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000024
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000152 1 0.000038
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000028 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000191 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.122774 2 0.000048
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.123095 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.123133 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000455 1 0.000621
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000061 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 327680 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.009079 6 0.000238
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 lc 39'153 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003586 3 0.000083
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 lc 39'153 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 lc 39'153 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000219 1 0.000043
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 lc 39'153 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028572 1 0.000109
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 311296 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.985326 1 0.000069
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 1.017831 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.027097 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000106 1 0.000157
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000047
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002738 3 0.000052
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 101 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0x105477/0x1a6000, compress 0x0/0x0/0x0, omap 0x1040e, meta 0x2bbfbf2), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 270336 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 741966 data_alloc: 218103808 data_used: 12351
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 101 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004844 2 0.000098
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007704 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/56 les/c/f=102/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003662 4 0.000096
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/56 les/c/f=102/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/56 les/c/f=102/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/56 les/c/f=102/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 262144 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 253952 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.663535118s of 10.045524597s, submitted: 86
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fce72000/0x0/0x4ffc00000, data 0x10be30/0x1b2000, compress 0x0/0x0/0x0, omap 0x10e32, meta 0x2bbf1ce), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 188416 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 180224 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 180224 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 751680 data_alloc: 218103808 data_used: 12351
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fce72000/0x0/0x4ffc00000, data 0x10be30/0x1b2000, compress 0x0/0x0/0x0, omap 0x10e32, meta 0x2bbf1ce), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 131072 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 131072 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 105 handle_osd_map epochs [105,106], i have 106, src has [1,106]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 122880 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 122880 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 114688 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765845 data_alloc: 218103808 data_used: 12351
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 107 heartbeat osd_stat(store_statfs(0x4fce6f000/0x0/0x4ffc00000, data 0x111104/0x1bb000, compress 0x0/0x0/0x0, omap 0x115e2, meta 0x2bbea1e), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 107 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 106496 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 108 heartbeat osd_stat(store_statfs(0x4fce6a000/0x0/0x4ffc00000, data 0x112b85/0x1be000, compress 0x0/0x0/0x0, omap 0x11876, meta 0x2bbe78a), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 65536 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0x114742/0x1c1000, compress 0x0/0x0/0x0, omap 0x11b0c, meta 0x2bbe4f4), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 49152 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.029823303s of 11.274053574s, submitted: 21
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 8192 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 109 handle_osd_map epochs [110,111], i have 109, src has [1,111]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 32768 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782787 data_alloc: 218103808 data_used: 12628
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 111 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0x117d4a/0x1c7000, compress 0x0/0x0/0x0, omap 0x11da4, meta 0x2bbe25c), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 24576 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1048576 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1040384 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fce59000/0x0/0x4ffc00000, data 0x11b367/0x1cd000, compress 0x0/0x0/0x0, omap 0x12257, meta 0x2bbdda9), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 1032192 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 1032192 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795501 data_alloc: 218103808 data_used: 12628
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 1032192 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 114 handle_osd_map epochs [115,117], i have 114, src has [1,117]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 950272 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f(unlocked)] enter Initial
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=0 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000147 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=0 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000037
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000213 1 0.000124
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000061 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000300 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 933888 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 118 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.019044 2 0.000099
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.019444 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.019480 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000085 1 0.000130
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.645082474s of 10.000852585s, submitted: 20
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 966656 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fce4b000/0x0/0x4ffc00000, data 0x125608/0x1df000, compress 0x0/0x0/0x0, omap 0x12cd7, meta 0x2bbd329), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 119 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.273404 5 0.000047
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002853 4 0.000121
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000072 1 0.000037
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.036220 1 0.000023
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 892928 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 829231 data_alloc: 218103808 data_used: 13771
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.737561 1 0.000059
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 0.776815 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.050251 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000118 1 0.000160
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001417 2 0.000047
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 121 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: merge_log_dups log.dups.size()=0 olog.dups.size()=11
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000631 2 0.000125
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000020 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 868352 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 121 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004884 2 0.000144
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007074 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=121/69 les/c/f=122/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002333 4 0.000269
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=121/69 les/c/f=122/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=121/69 les/c/f=122/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=121/69 les/c/f=122/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 794624 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 786432 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 786432 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce41000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 917504 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840818 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 909312 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 909312 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 901120 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 901120 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 901120 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840818 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 892928 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 892928 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.837522507s of 13.887688637s, submitted: 30
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 884736 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 884736 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 876544 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 845644 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 868352 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 860160 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 851968 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 851968 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 827392 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852885 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 827392 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 819200 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.002383232s of 10.074170113s, submitted: 12
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 819200 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 811008 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 802816 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857711 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 802816 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 794624 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 794624 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 794624 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 786432 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 860126 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 786432 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 778240 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 778240 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.908032417s of 10.916373253s, submitted: 4
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 770048 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 761856 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 864952 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 753664 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 745472 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 745472 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 737280 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 737280 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 869780 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 720896 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 720896 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 720896 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 712704 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 712704 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874604 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 704512 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.685772896s of 12.770541191s, submitted: 12
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 704512 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 696320 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 688128 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 688128 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877017 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 679936 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 679936 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 679936 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 671744 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 671744 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881839 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 663552 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 647168 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 638976 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 638976 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.860413551s of 12.898897171s, submitted: 10
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 630784 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889074 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 630784 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 622592 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 614400 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 606208 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 598016 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898724 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 598016 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 598016 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 589824 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 589824 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 581632 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905961 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 573440 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.782507896s of 12.041462898s, submitted: 16
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 565248 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 557056 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 557056 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 548864 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910787 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 548864 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 540672 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 524288 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 524288 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 516096 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915611 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 516096 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.002562523s of 10.019463539s, submitted: 8
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 499712 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 499712 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 491520 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 483328 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920435 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 483328 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 475136 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 475136 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 475136 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 466944 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920435 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 466944 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.074139595s of 10.080884933s, submitted: 4
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 458752 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 458752 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 458752 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 450560 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925263 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 450560 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 442368 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 434176 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 425984 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927674 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 425984 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.993028641s of 10.005455971s, submitted: 6
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 409600 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 401408 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 393216 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 393216 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930085 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 385024 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 385024 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 376832 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 376832 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 360448 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934907 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 352256 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 352256 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 344064 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.441165924s of 11.458217621s, submitted: 6
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 344064 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 335872 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937318 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 335872 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 335872 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 327680 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 327680 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 311296 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944551 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 311296 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 311296 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 303104 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 286720 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 278528 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944551 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 278528 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 278528 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 270336 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 270336 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 262144 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944551 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 262144 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 262144 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 253952 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 253952 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 237568 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944551 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 237568 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.850078583s of 22.913881302s, submitted: 8
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 229376 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 221184 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 212992 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 204800 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946964 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 204800 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 196608 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 196608 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 188416 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 180224 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949377 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 172032 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 172032 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.956479073s of 10.965095520s, submitted: 4
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 155648 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 155648 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 147456 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951790 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 147456 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 139264 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 139264 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 131072 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 131072 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954203 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 131072 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 122880 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 114688 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 106496 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 106496 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954203 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 98304 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 90112 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.974168777s of 15.981106758s, submitted: 4
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 81920 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 73728 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 73728 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956618 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 65536 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 65536 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 57344 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 57344 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 57344 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963855 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 49152 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 49152 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 40960 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 40960 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 40960 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966266 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 32768 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.788084984s of 13.944467545s, submitted: 10
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 32768 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 16384 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 16384 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 16384 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971088 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 0 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 0 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 1032192 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1024000 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1024000 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975910 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1015808 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1015808 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.056145668s of 11.071456909s, submitted: 8
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 1007616 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 1007616 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 999424 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980734 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 999424 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 999424 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 991232 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 991232 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 983040 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983147 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 983040 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 983040 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 974848 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 974848 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 966656 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983147 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 966656 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 966656 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 950272 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.651216507s of 15.828714371s, submitted: 6
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 950272 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 925696 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987973 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 925696 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 917504 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 917504 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 868352 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 868352 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992795 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 860160 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 860160 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 851968 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.020460129s of 10.043588638s, submitted: 10
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 851968 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 827392 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000030 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 827392 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1867776 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1867776 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1867776 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1859584 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1859584 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 1851392 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 1851392 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 1835008 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 1835008 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 1835008 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 1826816 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 1818624 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 1810432 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 1810432 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 1810432 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 1802240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 1802240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 1794048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 1794048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 1794048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 1785856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 1777664 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 1777664 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 1777664 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 1769472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 1769472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 1769472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1761280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1761280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1753088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1753088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1744896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 1736704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 1736704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1728512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1728512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1728512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 1712128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 1712128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 1703936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 1703936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 1695744 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 1687552 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 1687552 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1679360 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1679360 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1671168 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1671168 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 1662976 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 1662976 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 1662976 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 1654784 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 1654784 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 1646592 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 1646592 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 1638400 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1761280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1761280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1753088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1753088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1753088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1744896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1744896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 1736704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 1736704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1728512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1728512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 1720320 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 1720320 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 1720320 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 1712128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 1712128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 1703936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 1703936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 1703936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 1695744 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 1687552 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1679360 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1679360 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1671168 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1671168 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1671168 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 1662976 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 1662976 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 1654784 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 1654784 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 1646592 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 1638400 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 1638400 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 1638400 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 1630208 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 1630208 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 1622016 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 1622016 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 1613824 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 1613824 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 1605632 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 1605632 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 1597440 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 1597440 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 1597440 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 1589248 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 1581056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 1572864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 1572864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 1572864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 1556480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 1556480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 1548288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 1548288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 1548288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 1540096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 1531904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 1523712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 1523712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 1523712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 1515520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 1515520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 1507328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 1507328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1499136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1499136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1499136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1490944 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1490944 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1482752 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1482752 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1482752 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1474560 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1474560 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1474560 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1466368 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1466368 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 1458176 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 1458176 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 1449984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 1449984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 1449984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 1441792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 1441792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 1433600 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 1433600 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 1425408 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 1425408 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 1425408 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 1417216 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 1409024 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 1409024 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 1400832 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 1400832 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 1392640 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 1392640 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 1392640 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 1384448 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 1384448 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 1376256 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 1376256 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1368064 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1368064 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 1359872 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 1359872 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 1359872 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 1351680 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 1351680 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 1343488 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 1343488 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 1343488 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 1335296 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 1335296 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 1327104 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 1327104 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 1327104 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 1318912 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 1318912 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 1310720 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 1310720 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 1302528 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 1302528 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 1302528 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 1294336 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 1294336 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 1286144 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 1286144 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 1286144 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 1277952 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 1277952 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 1269760 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 1269760 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 1269760 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 1261568 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 1261568 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 1253376 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 1253376 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 1253376 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 1245184 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 1245184 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 1236992 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 1236992 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 1236992 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1228800 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1228800 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 1220608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 1220608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 1220608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 1212416 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 1212416 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 1204224 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 6904 writes, 28K keys, 6904 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 6904 writes, 1315 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6904 writes, 28K keys, 6904 commit groups, 1.0 writes per commit group, ingest: 19.80 MB, 0.03 MB/s
Interval WAL: 6904 writes, 1315 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1122304 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1114112 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1114112 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1114112 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 1105920 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 1105920 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 1097728 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 1097728 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 1089536 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 1081344 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 1081344 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 1073152 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 1073152 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 1064960 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 1064960 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 1056768 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 1056768 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 1048576 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 1048576 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 1048576 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 1040384 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 1040384 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 1040384 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 1040384 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 1040384 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 1032192 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 1032192 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 1024000 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 1007616 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 1007616 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 999424 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 999424 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 991232 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 991232 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 991232 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 983040 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 983040 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 974848 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 974848 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 966656 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 966656 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 966656 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 958464 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 958464 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 950272 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 257.986907959s of 257.997436523s, submitted: 6
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 942080 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 1007616 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 1007616 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 1007616 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 999424 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 999424 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 991232 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 991232 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 983040 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 983040 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 983040 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 974848 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 966656 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 966656 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 966656 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 958464 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 958464 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 958464 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 950272 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 950272 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 942080 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 942080 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 942080 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 933888 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 933888 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 925696 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 925696 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 925696 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 917504 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 917504 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 909312 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 909312 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 892928 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 892928 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 892928 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 884736 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 884736 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 876544 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 876544 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 868352 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 868352 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 868352 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 860160 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 860160 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 851968 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 851968 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 851968 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 843776 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 843776 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 835584 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 835584 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 827392 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 827392 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 827392 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 819200 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 819200 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 819200 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 802816 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 802816 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 794624 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 794624 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 786432 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 786432 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 753664 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 753664 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 753664 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 704512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 704512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 704512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 704512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 704512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 696320 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: mgrc ms_handle_reset ms_handle_reset con 0x5614daa2c000
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/894791725
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/894791725,v1:192.168.122.100:6801/894791725]
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: mgrc handle_mgr_configure stats_period=5
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 335872 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 335872 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 335872 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 ms_handle_reset con 0x5614da393c00 session 0x5614dbb5c8c0
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 188416 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 188416 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 188416 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.885894775s of 300.123291016s, submitted: 90
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 565248 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 565248 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 540672 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 540672 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 540672 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread fragmentation_score=0.000127 took=0.000074s
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 458752 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 458752 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 7128 writes, 29K keys, 7128 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 7128 writes, 1427 syncs, 5.00 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.933715820s of 299.960906982s, submitted: 22
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 688128 heap: 87105536 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 14:27:23 np0005589310 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
